About Me
I earned my PhD from the University of Washington, where I was advised by Dr. Mari Ostendorf. During my doctoral studies, I also collaborated closely with Noah A. Smith and Hao Cheng. My research focuses primarily on dialogue systems and on improving model inference efficiency.
Feel free to contact me if you are interested in my research or have any questions. I am more than happy to hear from you!
Email: roy.brlu [at] gmail [dot] com
You can download my resume here: pdf. (Last updated: July 2024)
News
- Oct 2024: I joined Amazon as an applied scientist to develop audio foundation models.
- July 2024: I passed my defense! Thesis title: Enhancing Transformer Models for Dialogue Summarization.
- July 2024: Our paper Does Collaborative Human–LM Dialogue Generation Help Information Extraction from Human–Human Dialogues? is accepted by CoLM!
- Mar 2024: Our paper Efficient Encoder-Decoder Transformer Decoding for Decomposable Tasks is on arXiv!
- Feb 2024: FlashInfer supports shared prefix batch decoding! Check out the blog post: Cascade Inference: Memory Bandwidth Efficient Shared Prefix Batch Decoding.
- Oct 2023: DIALGEN code and data are released! [code] [data] [project]
- Oct 2022: Our paper Unsupervised Learning of Hierarchical Conversation Structure is accepted to EMNLP 2022 Findings.
- Sep 2021: Our paper DIALKI: Knowledge Identification in Conversational Systems through Dialogue-Document Contextualization is accepted to EMNLP 2021.
- Aug 2021: Our team won 1st prize in subtask 1 of the DialDoc Shared Task at the 1st DialDoc Workshop at ACL 2021.