Ting-Yao 'Edward' Hsu (許庭耀)
School of Electrical Engineering and Computer Science
Penn State University
Hello! I am a third-year Ph.D. candidate at Penn State University working on vision-to-language generation and figure captioning, at the intersection of NLP and HCI. I am fortunate to be advised by Dr. Ting-Hao (Kenneth) Huang and Dr. C. Lee Giles. Prior to Penn State, I received my B.S. in Computer Science from National Tsing Hua University. Before my Ph.D., I had a great time working with Dr. Shang-Hong Lai in CVLAB and Dr. Yuan-Hao (Johnson) Chang at Academia Sinica.
My research lies at the intersection of vision-and-language and HCI. I am particularly interested in multimodal learning for vision-and-language tasks, with a current focus on vision-to-language generation. My goal is to bridge the gap between vision and language and to build AI systems that support social media platforms, assist writing and reading, and improve accessibility. I am also interested in language-only tasks, such as data-to-text generation and question-answering summarization.
Our paper, Summaries as Captions: Generating Figure Captions for Scientific Documents with Automated Text Summarization, was accepted to INLG 2023! See you in Prague.
I’ll attend ACL 2023 in person in Toronto during July 9–14. I’m excited to meet old and new NLP friends again; feel free to DM me if you’d like to chat!
We’re launching the 1st Scientific Figure Captioning (SciCap) Challenge! We invite AI/NLP/CV researchers to build systems that caption all types of figures in arXiv papers. The challenge will be hosted at the CLVL workshop at ICCV 2023.