DARPA announces $2B investment in AI

At a symposium in Washington, DC, on Friday, DARPA announced (https://ift.tt/2NYgqa7) plans to invest $2 billion in artificial intelligence research over the next five years.

Under a program called “AI Next,” the agency has more than 20 programs in the works and will focus on “enhancing the security and resiliency of machine learning and AI technologies, reducing power, data, performance inefficiencies and [exploring] ‘explainability’” of these systems.

“Machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible,” said DARPA director Dr. Steven Walker. “We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”

Artificial intelligence is a broad term that can encompass everything from intuitive search features to true machine learning, but virtually all such systems rely heavily on consuming data to inform their algorithms and “learn.” DARPA has a long history of research and development in this space, but has recently seen its efforts surpassed by foreign powers like China, which earlier this summer announced plans to become an AI leader by 2030.

In many cases these AI technologies are still in their infancy, but the field, and machine learning in particular, has the potential to completely transform not only how users interact with their own technology but also how corporate and governmental institutions use this technology to interact with their employees and citizens.

One particular concern with machine learning is the bias that can be baked into these systems by the data they consume during training. If the data contains holes or misinformation, the machines can come to incorrect conclusions, such as which individuals are “more likely” to commit crimes, and those conclusions can have devastating consequences. More troubling still, when a machine arrives at these conclusions organically, the “learning” it does is obscured inside what is known as a black box.

In other words, even the researchers who design the algorithms cannot always explain how the machines reach their conclusions.
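For readers curious what that failure mode can look like in practice, here is a minimal, hypothetical sketch in Python (using numpy and scikit-learn; the scenario, names, and numbers are invented for illustration and have nothing to do with DARPA's actual programs). It shows how a hole in the training data can make an irrelevant attribute look predictive to a model:

```python
# Toy illustration of bias from a gap in training data (hypothetical example).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: an attribute that should be irrelevant (say, neighborhood A vs. B).
# Feature 1: the attribute that actually drives the outcome.
group = rng.integers(0, 2, size=n)
signal = rng.normal(size=n)
label = (signal > 0).astype(int)  # ground truth depends only on `signal`

# Simulate a "hole" in the data: negative outcomes from neighborhood B were
# rarely recorded, so the training set over-represents B among positives.
keep = ~((group == 1) & (label == 0) & (rng.random(n) < 0.9))
X_train = np.column_stack([group, signal])[keep]
y_train = label[keep]

model = LogisticRegression().fit(X_train, y_train)

# The irrelevant group attribute now carries real weight in the model, so
# members of neighborhood B look "more likely" to receive a positive label.
print("learned coefficients (group, signal):", model.coef_[0])
```

In this simple linear model the skew is at least visible in the coefficients; in a deep neural network with millions of parameters, that kind of inspection is far harder, which is exactly the black-box problem described above.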

That said, when handled with care and forethought, AI research can be a powerful source of innovation and advancement as well. As DARPA moves forward with its research, we will see how it handles these important technical and societal questions.


