Student-Driven Machine Learning

From their early days at MIT, and even before, Emma Liu ’22, MEng ’22, Yo-whan “John” Kim ’22, MEng ’22, and Clemente Ocejo ’21, MEng ’22 knew they wanted to do computational research and explore artificial intelligence and machine learning. “I’ve been working on deep learning projects since high school,” said Kim, who participated in the Research Science Institute (RSI) summer program at MIT and Harvard and went on to work with Microsoft’s Kinect.

As students in the Department of Electrical Engineering and Computer Science who recently completed their Master of Engineering (MEng) theses, Liu, Kim, and Ocejo have already developed the skills to help guide application-focused projects. Working with the MIT-IBM Watson AI Lab, they improved text classification using limited labeled data and designed machine learning models to better predict product purchases over the long term. For Kim, “it’s been a very smooth transition and…a great opportunity for me to continue working in the deep learning and computer vision areas at the MIT-IBM Watson AI Lab.”

Synthetic video

Working with researchers from academia and industry, Kim designed, trained, and tested a deep learning model for recognizing actions across domains, in this case, in video. His team focused on training with synthetic data from generated videos and running prediction and inference tasks on real data made up of different action categories. They wanted to see how models pretrained on synthetic videos, specifically simulations of human or humanoid movements or motions generated by game engines, would transfer to real data: publicly available videos scraped from the internet.

The reason for the study, Kim said, is that real-life videos can raise issues, including representational bias, copyright, and ethical or personal sensitivities. For example, videos of cars hitting people are difficult to collect, and videos can capture people’s faces, real addresses, or license plates without their consent. Kim is running experiments with 2D, 2.5D, and 3D video models, with the goal of creating domain-specific or even large, general synthetic video datasets that can be used for transfer in data-starved areas. For a construction industry application, for example, this could mean running action recognition on a construction site. “I didn’t expect synthetically generated videos to perform comparably to real ones,” he said. “I think that opens up a lot of different roles [for this work] in the future.”
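
To make the general recipe concrete, the sketch below shows a pretrain-on-synthetic, fine-tune-on-real setup of the kind this research builds on. It is a simplified illustration, not Kim’s actual pipeline: the torchvision r3d_18 video backbone, the class counts, the hyperparameters, and the fine_tune_step helper are all assumptions chosen for the example.

# Illustrative sketch only: synthetic-to-real transfer for video action
# recognition. Backbone, class counts, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

NUM_SYNTHETIC_CLASSES = 150   # assumed size of the synthetic pretraining label set
NUM_REAL_CLASSES = 50         # assumed size of the real downstream dataset

# 1) Pretrain from scratch on synthetic videos.
model = r3d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_SYNTHETIC_CLASSES)
# ... training loop on synthetic clips omitted ...

# 2) Transfer: keep the learned video features, swap the classifier head,
#    and fine-tune on the smaller real-world action dataset.
model.fc = nn.Linear(model.fc.in_features, NUM_REAL_CLASSES)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(clips: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on real video clips of shape (B, C, T, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(clips), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

The interesting question in the study is how far that second, real-data stage can shrink when the first stage uses synthetic video, which is exactly the data-starved setting Kim describes.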

Although the project got off to a rocky start in collecting and generating data and in running many models, Kim said he wouldn’t have it any other way. “It’s amazing how the lab members encouraged me: ‘It’s OK. All the experiments and fun parts will follow. Don’t stress too much.’” It was this support that helped Kim take ownership of the work. “In the end, they gave me so much support and amazing ideas to help me complete this project.”

Data labeling

Data scarcity is also a theme of Emma Liu’s work. “The overarching problem is that there is all this data in the world, and for many machine learning problems you need that data to be labeled,” Liu said, “but then you have all this unlabeled data that you could be using to solve these problems, and you’re not really leveraging it.”

With guidance from her MIT and IBM group, Liu worked to leverage that data, training semi-supervised text classification models (and combining aspects of them) that add pseudo-labels to the unlabeled data, based on the model’s predictions and probabilities about which categories each piece of previously unlabeled data fits into. “The problem then is that previous research has shown that you can’t always trust the probabilities; specifically, neural networks have been shown to be overconfident a lot of the time,” Liu noted.

Liu and her team addressed this by evaluating the accuracy and uncertainty of the models and recalibrating them to improve her self-training framework. The self-training and calibration steps gave her better confidence in the predictions. The pseudo-labeled data, she said, could then be added to the pool of real data, expanding the dataset; the process could be repeated over a series of iterations.
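
As a rough illustration of the loop Liu describes, the Python sketch below trains a classifier, calibrates its probabilities, pseudo-labels the unlabeled examples it is most confident about, and repeats. It is not the lab’s code: the logistic-regression base model, scikit-learn’s CalibratedClassifierCV, the 0.9 confidence threshold, the iteration count, and the self_train helper are all illustrative assumptions.

# Illustrative self-training sketch; assumes text already vectorized into
# numpy feature matrices. Threshold and iteration count are arbitrary choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

def self_train(X_labeled, y_labeled, X_unlabeled,
               threshold=0.9, n_iterations=5):
    X_pool, y_pool = X_labeled.copy(), y_labeled.copy()
    remaining = X_unlabeled.copy()

    for _ in range(n_iterations):
        # Calibrate the classifier so its predicted probabilities are more
        # trustworthy before using them to assign pseudo-labels.
        model = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=3)
        model.fit(X_pool, y_pool)

        if len(remaining) == 0:
            break

        probs = model.predict_proba(remaining)
        confidence = probs.max(axis=1)
        pseudo_labels = model.classes_[probs.argmax(axis=1)]

        # Keep only the predictions the calibrated model is confident about.
        confident = confidence >= threshold
        if not confident.any():
            break

        # Grow the training pool with pseudo-labeled examples and repeat.
        X_pool = np.vstack([X_pool, remaining[confident]])
        y_pool = np.concatenate([y_pool, pseudo_labels[confident]])
        remaining = remaining[~confident]

    return model

The calibration step is the point Liu emphasizes: without it, the confidence scores that decide which pseudo-labels to trust are exactly the overconfident probabilities she warns about.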

For Liu, the biggest takeaway wasn’t the product but the process. “I learned a lot about being an independent researcher,” she said. As an undergraduate, Liu worked with IBM to develop machine learning methods for repurposing drugs already on the market, and she honed her decision-making skills there. Working with academic and industry researchers to build skills in asking tough questions, finding experts, digesting and presenting relevant scientific papers, and testing ideas, Liu and her fellow MEng students in the MIT-IBM Watson AI Lab felt they had the knowledge, freedom, and flexibility to decide the direction of their own research. Taking on that key role, Liu said, “I feel like I have ownership over my project.”

Demand forecasting

Clemente Ocejo also came away with a sense of mastery after working at MIT and the MIT-IBM Watson AI Lab, having started with an MIT Undergraduate Research Opportunities Program (UROP) project on artificial intelligence techniques and time series methods, where he met his MEng advisor. “You really have to be proactive in your decision-making and vocalize [your choices] as a researcher, letting people know that this is what you’re doing,” Ocejo said.

Ocejo leveraged his background in traditional time series methods to work with the lab on applying deep learning to better predict product demand in the medical field. Here, he designed, wrote, and trained a transformer, a specific machine learning model often used in natural language processing and capable of learning very long-term dependencies. Ocejo and his team compared target forecast demand across months to understand the dynamic connections and attention weights between product sales within a product line. They looked at identifier features concerning price and amount, as well as account features about who purchased the item or service.

“One product does not necessarily affect the prediction of another product at prediction time. It only affects the parameters that lead to the prediction during training,” Ocejo said. “Instead, we wanted to give it more of a direct impact, so we added this layer to make that connection and learn attention between all the products in the dataset.”
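
The sketch below shows one way such a cross-product attention layer could look, purely as an illustration of the idea rather than Ocejo’s actual architecture. The CrossProductForecaster class, the linear encoder, the dimensions, the 12-month horizon, and the single attention layer are assumptions made for the example.

# Illustrative cross-product attention for demand forecasting.
# Shapes, horizon, and layer choices are assumptions, not the lab's model.
import torch
import torch.nn as nn

class CrossProductForecaster(nn.Module):
    def __init__(self, history_len: int, d_model: int = 64, n_heads: int = 4,
                 horizon: int = 12):
        super().__init__()
        # Encode each product's sales history into a d_model-sized vector.
        self.encoder = nn.Linear(history_len, d_model)
        # Attention across products: every product's representation can
        # attend to every other product's before the forecast is made.
        self.cross_product_attn = nn.MultiheadAttention(
            d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, horizon)  # forecast `horizon` months ahead

    def forward(self, sales_history: torch.Tensor) -> torch.Tensor:
        # sales_history: (batch, n_products, history_len)
        z = self.encoder(sales_history)                 # (B, P, d_model)
        attended, _ = self.cross_product_attn(z, z, z)  # products attend to each other
        return self.head(attended)                      # (B, P, horizon)

# Example: forecast 12 months ahead for 30 products from 24 months of history.
model = CrossProductForecaster(history_len=24)
forecast = model(torch.randn(8, 30, 24))   # -> shape (8, 30, 12)

The design choice mirrors Ocejo’s point: rather than letting products influence each other only indirectly through shared parameters learned in training, the attention layer makes the interaction explicit at prediction time.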

Over the long term, on a year-long forecast, the MIT-IBM Watson AI Lab team was able to outperform the current model; even more impressively, it did so over the short term as well (nearly one fiscal quarter). Ocejo attributes this to the energy of his interdisciplinary team. “A lot of the people on my team aren’t necessarily very experienced in deep learning, but they have a lot of experience in supply chain management, operations research, and optimization, which is something I don’t have that much experience in,” Ocejo said. “They provided a lot of good high-level feedback on what to tackle next…and on what the industry would like to see or would like to improve, so that was really helpful in clarifying my focus.”

For this work, it wasn’t the sheer volume of data that made the difference for Ocejo and his team, but rather how it was structured and presented. Typically, large deep learning models require millions of data points to make meaningful inferences; however, the MIT-IBM Watson AI Lab team demonstrated that results and technique improvements can be tailored to a specific application. “This just goes to show that these models can learn useful things in the right environment, using the right architecture, without requiring too much data,” Ocejo said. “And then with more data, it’s only going to get better.”
