Jatin Sharma
Philadelphia, Pennsylvania, United States
bio
I am a graduate student in CIS (Embedded Systems) at the University of Pennsylvania. Previously, I worked as a Research Developer in the Multilingual Systems & Machine Learning group at Microsoft Research, Bangalore. My research focused on computational and empirical methodologies for mixed-language, computer-mediated communication: analysing the reasons behind code alternation (code-mixing, code-switching, and code-congruence) and developing machine learning models to normalize or back-transliterate such text. This work relates directly to personal digital assistants such as Siri, Google Now, and Cortana. I also enjoyed prototyping basic brain-machine interfaces that control living room devices and represent a user's cognitive state (mood) as light on a crystal ball.

At the Modern Innovations Group (MIG), I experimented with vision, gestures, speech, and touch, which led to several products and prototypes, including: converting any wall into a touchscreen, an intelligent meeting room assistant, a Wi-Fi-based indoor positioning and navigation system, seamless content sharing using a smartphone, a human-like personal robot, and a health-monitoring T-shirt.

I am a Young India Fellow. During the fellowship, I worked for around 8.5 months as a project fellow with Prof. Rahul Mangharam (Director, mLab) at SEAS, University of Pennsylvania. We developed 'viSparsh', a haptic belt that aids the visually impaired in navigation. 'viSparsh' was recognized as one of the top 12 innovations by the Wall Street Journal (Asian Innovation Awards 2012) and received several international accolades.