DEEP NEURAL NETWORK (DNN)

  • The adoption of Artificial Intelligence (AI) chips has risen in recent times, with chipmakers designing different types of these chips to power AI applications.
  • AI chips are built with specific architecture and have integrated AI acceleration to support deep learning-based applications.
  • Deep learning, typically implemented through Artificial Neural Networks (ANNs), and in particular Deep Neural Networks (DNNs), is a subset of machine learning and comes under the broader umbrella of AI.
  • It combines a series of computer commands or algorithms that simulate the activity and structure of the human brain.
  • DNNs go through a training phase, learning new capabilities from existing data.
  • DNNs can then perform inference, applying the capabilities learned during training to make predictions on previously unseen data (see the code sketch after this list).
  • Deep learning can make the process of collecting, analysing, and interpreting enormous amounts of data faster and easier.
  • Chips like these, with their hardware architectures, complementary packaging, memory, storage, and interconnect solutions, make it possible to integrate AI into applications across a wide spectrum, turning data into information and then into knowledge.
  • Common types of AI chips include Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Central Processing Units (CPUs), and Graphics Processing Units (GPUs).
  • AI applications include Natural Language Processing (NLP), computer vision, robotics, and network security across a wide variety of sectors, including automotive, IT, healthcare, and retail.
  • The increasing adoption of AI chips in data centres is one of the major factors driving the growth of the market.
  • Additionally, the rise in the need for smart homes and cities, and the surge in investments in AI start-ups are expected to drive the growth of the global AI chip market.
  • The worldwide AI chip industry was valued at approximately USD 8 billion in 2020 and is expected to reach USD 195 billion by 2030, growing at a Compound Annual Growth Rate (CAGR) of 37.4% from 2021 to 2030 (a quick arithmetic check follows this list).
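
The CAGR figure can be roughly verified from the two endpoint values using the standard compound-growth formula. The snippet below assumes simple annual compounding over 10 years; the small gap between the computed value and the cited 37.4% is likely due to rounding of the endpoint figures.

```python
# Sanity check of the cited market figures (assumption: plain
# compound growth over the 10 years from 2020 to 2030).
v_2020, v_2030, years = 8.0, 195.0, 10       # market size in USD billion
cagr = (v_2030 / v_2020) ** (1 / years) - 1  # compound annual growth rate
print(f"Implied CAGR: {cagr:.1%}")           # ~37.6%, close to the cited 37.4%
```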
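The training-then-inference workflow referenced above can be illustrated with a minimal NumPy sketch. This is not any production framework or a method from the source: the 2-4-1 network shape, XOR task, learning rate, and epoch count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function on binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a tiny 2-4-1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training phase: learn from existing data by gradient descent.
lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)             # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)           # forward pass: prediction
    d_out = (out - y) * out * (1 - out)  # backward pass: output gradient
    d_h = (d_out @ W2.T) * h * (1 - h)   # backward pass: hidden gradient
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

# Inference phase: apply the learned weights to a previously unseen
# (noisy) input close to (1, 0); the network should predict ~1.
x_new = np.array([[0.9, 0.1]])
pred = sigmoid(sigmoid(x_new @ W1 + b1) @ W2 + b2)
print(f"Prediction for {x_new[0]}: {pred[0, 0]:.3f}")
```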

Significance:

  • Artificial intelligence applications typically require parallel computational capabilities in order to run sophisticated training models and algorithms.
  • AI hardware provides greater parallel processing capability, estimated to deliver up to 10 times more computing power in ANN applications than traditional semiconductor devices at similar price points.
  • Specialized AI hardware is also estimated to provide 4-5 times more bandwidth than traditional chips.
  • This extra bandwidth is necessary because parallel processing requires significantly more data movement between processors for efficient performance (a rough illustration follows this list).
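
As a rough, CPU-only illustration of the parallelism point above (not a hardware benchmark): the same matrix multiplication runs orders of magnitude faster when dispatched to NumPy's vectorised kernel, which exploits SIMD and optimised BLAS routines, than as a sequential Python loop. AI accelerators push the same principle, many independent multiply-accumulates at once, much further.

```python
import time
import numpy as np

n = 128
A = np.random.rand(n, n)
B = np.random.rand(n, n)

# Sequential triple loop: one scalar multiply-add at a time.
t0 = time.perf_counter()
C = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        s = 0.0
        for k in range(n):
            s += A[i, k] * B[k, j]
        C[i, j] = s
t_loop = time.perf_counter() - t0

# Vectorised: the whole product handed to an optimised parallel kernel.
t0 = time.perf_counter()
C_vec = A @ B
t_vec = time.perf_counter() - t0

assert np.allclose(C, C_vec)  # both methods compute the same product
print(f"loop: {t_loop:.3f}s  vectorised: {t_vec:.5f}s  "
      f"speed-up: ~{t_loop / t_vec:.0f}x")
```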

SOURCE: THE HINDU, THE ECONOMIC TIMES, MINT
