Our workshop on Semantic Interpretation in an Actionable Context (SIAC) has been accepted as a full-day workshop at the upcoming NAACL conference.
We are currently soliciting submissions for the workshop; for more details, see the workshop's website.
Professor Dan Roth was named a 2011 ACM Fellow in recognition of his “contributions to machine learning and natural language processing.” This program from the Association for Computing Machinery seeks to honor outstanding members for their achievements in computer science and information technology. For more information about Professor Roth’s work leading to this honor, see this story.
Five students in the Cognitive Computation Group successfully defended their Ph.D. dissertations this summer and early fall. Ming-Wei Chang’s thesis was “Structured Prediction with Indirect Supervision”; he has joined Microsoft. Mike Connor’s thesis was “Minimal Supervision for Language Learning: Bootstrapping Global Patterns from Local Knowledge.” Nick Rizzolo’s thesis was “Learning Based Programming”; he has joined Google in California. Lev Ratinov’s thesis was “Encyclopedia Knowledge in NLP”; he has joined Google in New York. Jeff Pasternack’s thesis was “Knowing Who to Trust and What to Believe in the Presence of Conflicting Information.” Congratulations to all of our recent graduates, and best of luck in your future pursuits!
A team from the Cognitive Computation Group placed first in the Helping Our Own text correction shared task. Group members Alla Rozovskaya, Mark Sammons, Josh Gioja and Dan Roth designed a system that focused on common errors in text written by non-native English speakers, including article, preposition, punctuation, and word choice mistakes. The system ranked first among six teams from around the world in all three evaluation metrics: Detection, Recognition and Correction.
Gourab Kundu and Dan Roth’s paper “Adapting Text Instead of the Model: An Open Domain Approach” won the Best Student Paper Award at the 2011 Conference on Computational Natural Language Learning (CoNLL), held in Portland, OR, June 23-24. In this paper, they show that while most adaptation algorithms in the literature require costly retraining of models when used in new domains, good results can be obtained by transforming the target text “on the fly” to make it more similar to the original training domain.
The Cognitive Computation Group’s Illinois Coreference Package performed well in the 2011 Conference on Computational Natural Language Learning (CoNLL) shared task: Modeling Unrestricted Coreference in OntoNotes. Competing in the closed mode, the CCG team’s system ranked first in two out of four scoring metrics (B3 and BLANC), and ranked third in average score. Team members Kai-Wei Chang, Rajhans Samdani, Alla Rozovskaya, Nick Rizzolo, Dan Roth and Mark Sammons were pleased with their system’s performance in this important competition.
Kai-Wei Chang was recently named one of two winners of the prestigious Yahoo! 2011 Key Scientific Challenges Program award in the area of Machine Learning. This award, designed to support research leading to the next generation of Internet technology, provides recipients with $5,000 of unrestricted research funding to use for conference travel and lab materials, along with exclusive access to Yahoo! resources and scientists. Winners are also invited to attend this summer’s Key Scientific Challenges Graduate Student Summit to present their work.
Professor Dan Roth recently received a Faculty Research Award from Google to support work on Information Trustworthiness. Given the large amount of information available online today, users often need help figuring out which sources are reliable. Professor Roth and other members of the Cognitive Computation Group are developing automated systems for evaluating trustworthiness. For more information, see this story and the CCG project page on Trustworthiness.
Jeff Pasternack and Dan Roth’s paper “Comprehensive Trust Metrics for Information Networks” won best paper honors in the Network Science category at the 27th Army Science Conference, held November 29-December 2, 2010 in Orlando, FL. They introduce three new metrics for measuring the trustworthiness of information sources (truthfulness, completeness, and bias) and show that these convey a more useful and robust picture of how much, and in what way, an information source should be trusted than current practice does. For more information, see this story.