Tag Archives: Microsoft Research

Anticipating More from Cortana: A Look at the Future of Windows Phone

Microsoft Research – April 17, 2014 

 

Most of us can only dream of having the perfect personal assistant, one who is always there when needed, anticipating our every request and unobtrusively organizing our lives. Cortana, the new digital personal assistant powered by Bing that comes with Windows Phone 8.1, brings users closer to that dream.


 

For Larry Heck, a distinguished engineer in Microsoft Research, this first release offers a taste of what he has in mind. Over time, Heck wants Cortana to interact in an increasingly anticipatory, natural manner.

Cortana already offers some of this behavior. Rather than just performing voice-activated commands, Cortana continually learns about its user and becomes increasingly personalized, with the goal of proactively carrying out the right tasks at the right time. If its user asks about outside temperatures every afternoon before leaving the office, Cortana will learn to offer that information without being asked.

Furthermore, if given permission to access phone data, Cortana can read calendars, contacts, and email to improve its knowledge of context and connections. Heck, who plays classical trumpet in a local orchestra, might receive a calendar update about a change in rehearsal time. Cortana would let him know about the change and alert him if the new time conflicts with another appointment.
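The conflict alert described above boils down to an interval-overlap test on two appointments. Here is a minimal, self-contained sketch (a hypothetical illustration, not Cortana's actual code — the class and names are my own) of how an assistant might flag such a clash:

```java
import java.time.LocalDateTime;

// Hypothetical sketch of calendar-conflict detection: two half-open
// intervals [start, end) overlap exactly when each begins before the
// other ends.
public class ConflictChecker {
    public static boolean conflicts(LocalDateTime start1, LocalDateTime end1,
                                    LocalDateTime start2, LocalDateTime end2) {
        return start1.isBefore(end2) && start2.isBefore(end1);
    }

    public static void main(String[] args) {
        // A rehearsal moved to 7-9 PM clashes with an 8-9 PM appointment.
        LocalDateTime rehearsalStart = LocalDateTime.of(2014, 4, 17, 19, 0);
        LocalDateTime rehearsalEnd   = rehearsalStart.plusHours(2);
        LocalDateTime dinnerStart    = LocalDateTime.of(2014, 4, 17, 20, 0);
        LocalDateTime dinnerEnd      = dinnerStart.plusHours(1);
        if (conflicts(rehearsalStart, rehearsalEnd, dinnerStart, dinnerEnd)) {
            System.out.println("Alert: the new rehearsal time conflicts with another appointment.");
        }
    }
}
```

Because each interval is half-open, back-to-back appointments (one ending exactly when the next begins) are correctly not reported as conflicts.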

Research Depth and Breadth an Advantage

While many people would categorize such logical associations and humanlike behaviors under the term “artificial intelligence” (AI), Heck points to the diversity of research areas that have contributed to Cortana’s underlying technologies. He views Cortana as a specific expression of Microsoft Research’s work on different areas of personal-assistant technology.

“The base technologies for a virtual personal assistant include speech recognition, semantic/natural language processing, dialogue modeling between human and machines, and spoken-language generation,” he says. “Each area has in it a number of research problems that Microsoft Research has addressed over the years. In fact, we’ve pioneered efforts in each of those areas.”

The Cortana user interface.

Cortana’s design philosophy is therefore grounded in state-of-the-art machine-learning and data-mining algorithms. Furthermore, both developers and researchers can draw on Microsoft’s broad assets across commercial and enterprise products, including strong ties to Bing web search and Microsoft speech algorithms and data.

If Heck has set the bar high for Cortana’s future, it’s because of the deep, varied expertise within Microsoft Research.

“Microsoft Research has a long and broad history in AI,” he says. “There are leading scientists and pioneers in the AI field who work here. The underlying vision for this work and where it can go was derived from Eric Horvitz’s work on conversational interactions and understanding, which go as far back as the early ’90s. Speech and natural language processing are research areas of long standing, and so is machine learning. Plus, Microsoft Research is a leader in deep-learning and deep-neural-network research.”

From Foundational Technology to Overall Experience

In 2009, Heck started what was then called the conversational-understanding (CU) personal-assistant effort at Microsoft.

“I was in the Bing research-and-development team reporting to Satya Nadella,” Heck says, “working on a technology vision for virtual personal assistants. Steve Ballmer had recently tapped Zig Serafin to unify Microsoft’s various speech efforts across the company, and Zig reached out to me to join the team as chief scientist. In this role and working with Zig, we began to detail out a plan to build what is now called Cortana.”

Researchers who worked on the Cortana product (from left): top row, Malcolm Slaney, Lisa Stifelman, and Larry Heck; bottom row, Gokhan Tur, Dilek Hakkani-Tür, and Andreas Stolcke.

Heck and Serafin established the vision, mission, and long-range plan for Microsoft’s digital-personal-assistant technology, based on scaling conversations to the breadth of the web, and they built a team with the expertise to create the initial prototypes for Cortana. As the effort got off the ground, Heck’s team hired and trained several Ph.D.-level engineers for the product team to develop the work.

“Because the combination of search and speech skills is unique,” Heck says, “we needed to make sure that Microsoft had the right people with the right combination of skills to deliver, and we hired the best to do it.”

After the team was in place, Heck and his colleagues joined Microsoft Research to continue to think long-term, working on next-generation personal-assistant technology.

Some of the key researchers in these early efforts included Microsoft Research senior researchers Dilek Hakkani-Tür and Gokhan Tur, and principal researcher Andreas Stolcke. Other early members of Heck’s team included principal research software developer Madhu Chinthakunta, and principal user-experience designer Lisa Stifelman.

“We started out working on the low-level, foundational technology,” Heck recalls. “Then, near the end of the project, our team was doing high-level, all-encompassing usability studies that provided guidance to the product group. It was kind of like climbing up to the crow’s nest of a ship to look over the entire experience.

“Research manager Geoff Zweig led usability studies in Microsoft Research. He brought people in, had them try out the prototype, and just let them go at it. Then we would learn from that. Microsoft Research was in a good position to study usability, because we understood the base technology as well as the long-term vision and how things should work.”

The Long-Term View

Heck has been integral to Cortana since its inception, but even before coming to Microsoft in 2009, he had already contributed to early research on CU personal assistants. While at SRI International in the 1990s, his tenure included some of the earliest work on deep-learning and deep-neural-network technology.

Heck was also part of an SRI team whose efforts laid the groundwork for the CALO AI project funded by the U.S. government’s Defense Advanced Research Projects Agency. The project aimed to build a new generation of cognitive assistants that could learn from experience and reason intelligently under ambiguous circumstances. Later roles at Nuance Communications and Yahoo! added expertise in research areas vital to making Cortana robust.

The notebook menu for Cortana.

Not surprisingly, Heck’s perspectives extend to a distant horizon.

“I believe the personal-assistant technology that’s out there right now is comparable to the early days of search,” he says, “in the sense that we still need to grow the breadth of domains that digital personal assistants can cover. In the mid-’90s, before search, there was the Yahoo! directory. It organized information, it was popular, but as the web grew, the directory model became unwieldy. That’s where search came in, and now you can search for anything that’s on the web.”

He sees personal-assistant technology traveling along a similar trajectory. Current implementations target the most common functions, such as reminders and calendars, but as technology matures, the personal assistant has to extend to other domains so that users can get any information and conduct any transaction anytime and anywhere.

“Microsoft has intentionally built Cortana to scale out to all the different domains,” Heck says. “Having a long-term vision means we have a long-term architecture. The goal is to support all types of human interaction—whether it’s speech, text, or gestures—across domains of information and function and make it as easy as a natural conversation.”


How Microsoft’s Research Team Is Making Testing and the Use of Pex & Moles Fun and Interesting

Try it out on the web

Go to www.pex4fun.com and click Learn to start the tutorials.

Main Publication to cite

Nikolai Tillmann, Jonathan De Halleux, Tao Xie, Sumit Gulwani, and Judith Bishop, Teaching and Learning Programming and Software Engineering via Interactive Gaming, in Proc. 35th International Conference on Software Engineering (ICSE 2013), Software Engineering Education (SEE), May 2013

 

Massive Open Online Courses (MOOCs) have recently gained popularity at universities and beyond. A critical factor in their teaching and learning effectiveness is assignment grading. Traditional approaches to grading do not scale and do not give students timely, interactive feedback.

 

To address these issues, we present an interactive-gaming-based teaching and learning platform called Pex4Fun. Pex4Fun is a browser-based teaching and learning environment targeting teachers and students for introductory to advanced programming or software engineering courses. At the core of the platform is an automated grading engine based on symbolic execution.
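The grading idea can be pictured as a "coding duel": the engine hunts for inputs on which the student's code and a hidden reference implementation disagree. The sketch below is a simplified stand-in — Pex4Fun finds such distinguishing inputs with dynamic symbolic execution, whereas this illustration (class and method names are my own) simply enumerates a small input range:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Simplified sketch of duel-style grading: report every input in a range
// where the student's attempt disagrees with the hidden reference.
// (The real engine derives distinguishing inputs via symbolic execution
// instead of brute-force enumeration.)
public class DuelGrader {
    public static List<Integer> findMismatches(IntUnaryOperator secret,
                                               IntUnaryOperator attempt,
                                               int lo, int hi) {
        List<Integer> mismatches = new ArrayList<>();
        for (int x = lo; x <= hi; x++) {
            if (secret.applyAsInt(x) != attempt.applyAsInt(x)) {
                mismatches.add(x);
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        IntUnaryOperator secret  = x -> Math.abs(x); // hidden specification
        IntUnaryOperator attempt = x -> x;           // student's first guess
        // The attempt is wrong exactly on negative inputs.
        System.out.println(findMismatches(secret, attempt, -3, 3));
    }
}
```

An empty mismatch list means the attempt is behaviorally identical to the specification on the tested range — the feedback the student sees is the list of counterexample inputs, which is what makes the loop of guessing and refining feel like a game.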

 

In Pex4Fun, teachers can create virtual classrooms, customize existing courses, and publish new learning material, including learning games. Pex4Fun was released to the public in June 2010, and since then users have made more than one million attempts to solve its games.

 

Our work on Pex4Fun illustrates that a sophisticated software engineering technique – automated test generation – can be successfully used to underpin automatic grading in an online programming system that can scale to hundreds of thousands of users.

 

 

Code Hunt is an educational coding game.

Play: win levels, earn points!

Analyze with the capture code button

Code Hunt is a game! The player, the code hunter, has to discover missing code fragments. The player wins points for each level won, with an extra bonus for elegant solutions.

Code in Java or C#

Discover a code fragment

Play in Java or C#… or in both! Code Hunt allows you to play in those two curly-brace languages. Code Hunt provides a rich editing experience with syntax coloring, squiggles, search and keyboard shortcuts.

Learn algorithms


As players progress through the sectors, they learn about arithmetic operators, conditional statements, loops, strings, search algorithms, and more. Code Hunt is a great tool for building or sharpening your algorithm skills. Starting from simple problems, Code Hunt provides fun for even the most skilled coders.

Graded for correctness and quality

Modify the code to match the code fragment

At the core of the game experience is an automated grading engine based on dynamic symbolic execution. The grading engine automatically analyzes the user code and the secret code to generate the result table.

MOOCs with Office Mix

Add Code Hunt to your presentations

Code Hunt can be included in any PowerPoint presentation and published as an Office Mix Online Lesson. Use this PowerPoint template to create Code Hunt-themed presentations.

Web: no installs, it just works

It just works

Code Hunt runs in most modern browsers, including Internet Explorer 10 and 11 and recent versions of Chrome and Firefox. Yup, it works on iPad.

Extras: play your own levels

Play your own levels

Extra Zones with new sectors and levels can be created and reused. Read the designer usage manual to create your own zone.

Compete: so you think you can code

Compete

Code Hunt can be used to run small-scale or large-scale, private or public coding competitions. Each competition gets its own set of sectors and levels and its own leaderboard to determine the outcome. Please contact codehunt@microsoft.com for more information about running your own competition using Code Hunt.

Credits: the team

Capture the working code fragment

Code Hunt was developed by the Research in Software Engineering (RiSE) group and the Connections group at Microsoft Research. Go to our Microsoft Research page to find a list of publications about Code Hunt.

Microsoft Research’s Open Access Policy

Peter Lee, the head of Microsoft Research, shared some highlights of the organization in a recent interview with Scientific American:

Microsoft Research has

  • 1,100 Researchers
  • 13 Laboratories around the world
    • with a 14th opening soon in Brazil

To put it in perspective, Microsoft Research accounts for about 1 percent of the organization.

In keeping with its mission of:

“promoting open publication of all research results
and encouraging deep collaborations with academic researchers.”

 

Microsoft Research crafted the following

Open Access Policy

  • Retention of Rights:
    Microsoft Research retains a license to make our Works available to the research community in our online Microsoft Research open-access repository.
  • Authorization to enter into publisher agreements:
    Microsoft researchers are authorized to enter into standard publication agreements with Publishers on behalf of Microsoft, subject to the rights retained by Microsoft as per the previous paragraph.
  • Deposit:
    Microsoft Research will endeavor to make every Microsoft Research-authored article available to the public in an open-access repository.

The Open Access Policy introduction states:

“Microsoft Research is committed to disseminating the fruits of its research and scholarship as widely as possible because we recognize the benefits that accrue to scholarly enterprises from such wide dissemination, including more thorough review, consideration and critique, and general increase in scientific, scholarly and critical knowledge.”

 

In addition to adopting this policy, Microsoft Research also:

“…encourage researchers with whom we collaborate, and to whom we provide support, to embrace open access policies, and we will respect the policies enacted by their institutions.”

The MSDN blog closes with a perspective on the ongoing changes in the structure of scientific publishing:

We are undoubtedly in the midst of a transition in academic publishing—a transition affecting publishers, institutions, librarians and curators, government agencies, corporations, and certainly researchers—in their roles both as authors and consumers. We know that there remain nuances to be understood and adjustments to be made, but we are excited and optimistic about the impact that open access will have on scientific discovery.

 

The MSDN blog was authored by

  • Jim Pinkelman, Senior Director, Microsoft Research Connections, and
  • Alex Wade, Director for Scholarly Communication, Microsoft Research

 

Microsoft Research Tackles Ecosystem Modeling

Josh Henretig – 17 Jan 2013

What if there were a giant computer model that could dramatically enhance our understanding of the environment and lead to policy decisions that better support conservation and biodiversity?

 

A team of researchers at Microsoft Research is building just such a model, and has published an article today in Nature (paid access) calling on other scientists to do the same. When Drew Purves, head of Microsoft’s Computational Ecology and Environmental Science Group (CEES), and his colleagues at Microsoft Research in Cambridge, United Kingdom, began working with the United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC), they didn’t know they would end up modeling life at global scales.

 

“UNEP-WCMC is an international hub of important conservation activity, and we were pretty open-minded about exactly what we might do together,” says Purves. But they quickly realized that what was really needed was a general ecosystem model (GEM) – something that hasn’t been possible to date because of the vast scale involved. In turn, findings from a GEM could contribute to better informed policy decisions about biodiversity. But first, a primer on terminology. A GCM (general circulation model) is a mathematical model that mimics the physics and chemistry of the planet’s land, ocean and atmosphere. While scientists use these models to better understand how the earth’s climate systems work, they are also used to make predictions about climate change and inform public policy. Because these models have been so successful, members of the conservation community are looking for a model that could improve their understanding of biodiversity.

 

Building a GEM is challenging—but not impossible. Microsoft Research and UNEP-WCMC have spent the past two years developing a prototype GEM for terrestrial and marine ecosystems. The prototype, dubbed the Madingley Model, is built on top of another hugely ambitious project that the group just finished: modeling the global carbon cycle. With this as a starting point, they set out to model all animal life too: herbivores, omnivores, and carnivores of all sizes, on land and in the sea. The Computational Ecology group was in a unique position to do this, because the group includes actual ecologists (like Purves) doing novel research within Microsoft Research itself. In addition, they’re developing novel software tools for doing this kind of science, which has helped the team as it’s come up against all kinds of computational and technical challenges.
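The Madingley Model itself simulates many interacting functional groups at global scale. As a flavor of the kind of population dynamics a GEM must capture, here is a classic two-species Lotka-Volterra predator-prey system integrated with small Euler steps — an illustrative sketch with made-up parameter values, not the Madingley code:

```java
// Illustrative only: a two-species predator-prey system. Prey grow and are
// eaten; predators gain from predation and die off. A GEM generalizes this
// idea to many functional groups coupled to climate and the carbon cycle.
public class PredatorPrey {
    public static double[] simulate(double prey, double predator, int steps, double dt) {
        final double birth = 1.0;        // prey birth rate (illustrative)
        final double predation = 0.1;    // predation rate
        final double efficiency = 0.075; // predator gain per prey eaten
        final double death = 1.5;        // predator death rate
        for (int i = 0; i < steps; i++) {
            // Forward-Euler update of the Lotka-Volterra equations.
            double dPrey = (birth * prey - predation * prey * predator) * dt;
            double dPred = (efficiency * prey * predator - death * predator) * dt;
            prey += dPrey;
            predator += dPred;
        }
        return new double[] { prey, predator };
    }

    public static void main(String[] args) {
        double[] state = simulate(40.0, 9.0, 1000, 0.01);
        System.out.printf("prey=%.1f predator=%.1f%n", state[0], state[1]);
    }
}
```

Even this toy model oscillates rather than settling to a fixed point, which hints at why validating a full GEM against real ecosystems — and gathering the data to do so — is the hard part.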

 

Nonetheless, the model’s outputs have been broadly consistent with current understanding of ecosystems. One challenge is that while some of the data needed to create an effective GEM has already been collected and is stored away in research institutions, more data is needed. A major new data-gathering program would be expensive, so supporters of GEMs are calling on governments around the world to support programs that manage large-scale collection of ecological and climate data. But if you build it, will they come?

 

Drew Purves knows building a realistic GEM is possible, but he believes the real challenge is constructing a model that will enable policy makers to manage our natural resources better – and that means making sure the predictions are accurate. If such an accurate, trustworthy model can be achieved, conservationists will one day be able to couple data from GEMs with models from other fields to provide a more comprehensive guide to global conservation policy. Finding solutions to climate change and ecosystem preservation is too big a challenge for any one entity to tackle in isolation.

 

And that’s exactly why we think that computer modeling has potential. It’s another great example of the continually evolving role that technology will play in addressing the environmental challenges facing the planet—and we’re honored to be working hand in hand with the United Nations Environment Programme to begin solving those challenges.

See more at: http://blogs.msdn.com/b/microsoft-green/archive/2013/01/17/microsoft-research-tackles-ecosystem-modeling.aspx