
Geoffrey Hinton, a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, has recently announced his departure from Google. He leaves the tech giant to focus on raising public awareness of the serious risks he believes may accompany the technology he helped create, particularly large language models like GPT-4.

Hinton helped popularize backpropagation, a training technique from the 1980s that allows machines to learn from data and that underpins almost all neural networks today. The power of networks trained via backpropagation became apparent in the 2010s, eventually leading to the development of large language models.

Hinton’s concerns about AI stem from the capabilities of these large language models. Although they have far fewer connections than the human brain, they hold an impressive amount of knowledge. Because such models learn quickly, and because identical copies of a network can instantly share what they learn, Hinton believes we may be dealing with a new and better form of intelligence.

Despite the potential benefits of this new form of intelligence, Hinton is concerned about the possible misuse of AI by bad actors for malicious purposes, such as manipulating elections or waging wars. He also worries about machines figuring out ways to manipulate or harm humans who are unprepared for this new technology.

While some experts, like Meta’s chief AI scientist Yann LeCun, have a more optimistic outlook, others, like Yoshua Bengio, scientific director of the Montreal Institute for Learning Algorithms, urge caution and rational debate about the risks involved.

Hinton’s departure from Google has ignited conversations about the need for global cooperation and agreement on the risks and regulations surrounding AI. He believes that an international ban, similar to the one on chemical weapons, could serve as a model for curbing the development and use of dangerous AI.

What we believe: AI is a great tool that everyone needs to learn to use. However, just as the police use fast cars to apprehend bank robbers in their getaway vehicles, we need to police the way we use AI. The potential risks should not scare us away from embracing this technology, but rather encourage us to establish guidelines and regulations that ensure its safe and ethical use.
