Heroes of Deep Learning: Andrew Ng interviews Head of Baidu Research, Yuanqing Lin
Yuanqing Lin, Head of Baidu Research and Head of China's National Lab on Deep Learning, discusses the founding of the national lab and its impact on the deep learning community. Lin provides insights into China's investment in deep learning and how it has driven growth across various sectors. He stresses the importance of feedback loops in AI development and how they help create better algorithms and technologies. Lin advises newcomers to build a strong foundation in machine learning and to start with an open-source framework to enter the field successfully.
Heroes of Deep Learning: Dawn Song on AI, Deep Learning and Security
Dawn Song, an expert in deep learning and computer security, discusses her career path and her work in AI, deep learning, and security. She emphasizes the importance of identifying key problems or questions to guide one's reading when first entering the field, and of developing a strong foundation in representation learning to facilitate research in other domains. She also highlights the growing importance of building resilient AI and machine learning systems, and her work on defense mechanisms against black-box attacks. Song shares her work on privacy and security, including training differentially private language models and developing a privacy-first cloud computing platform on blockchain at Oasis Labs. Finally, she advises people entering new fields to be brave and not afraid to start from scratch.
The Revolution Of AI | Artificial Intelligence Explained | New Technologies | Robotics
This video explores the revolution of AI, starting with the future of autonomous vehicles and self-learning robots capable of navigating complex terrain, conducting search-and-rescue missions, and interacting with humans in collaborative workspaces. The development of swarm robotics shows huge potential for improving areas like farming, healthcare, and disaster response. Researchers are working on making robots more self-aware and able to communicate through natural language processing, on hyper-realistic digital avatars, and on more human-like androids that could serve as holographic assistants or companions for the elderly and socially isolated. While the benefits of AI for society are immense, there is also a need for ethical oversight and developer accountability to ensure AI remains aligned with positive intentions.
Deep-dive into the AI Hardware of ChatGPT
What hardware was used to train ChatGPT and what does it take to keep it running? In this video we will take a look at the AI hardware behind ChatGPT and figure out how Microsoft & OpenAI use machine learning and Nvidia GPUs to create advanced neural networks.
The video discusses the hardware used for training and inference in ChatGPT, a conversational AI model. Microsoft's AI supercomputer, built with over 10,000 Nvidia V100 GPUs and 285,000 CPU cores, was used to train GPT-3 and also contributed to the creation of ChatGPT. ChatGPT itself was probably fine-tuned on Azure infrastructure, using 4,480 Nvidia A100 GPUs and over 70,000 CPU cores for training. For inference, ChatGPT is likely running on a single Nvidia DGX or HGX A100 instance on Microsoft Azure servers. The video also covers the cost of running ChatGPT at scale and the potential impact of new AI hardware such as neural processing units and AI engines.
Nvidia CEO Jensen Huang On How His Big Bet On A.I. Is Finally Paying Off - Full Interview
Nvidia CEO Jensen Huang highlights the company's history of agility and reinvention, emphasizing its willingness to take big bets and forget past mistakes to remain relevant in the fast-moving tech industry. Nvidia's ambition was always to be a computing platform company, and its mission to create more general-purpose accelerated computing led to its success in artificial intelligence. Huang also discusses the democratization of AI technology and its potential impact on small startups and various industries. He encourages people to take advantage of AI to increase their productivity and highlights Nvidia's unique approach of providing versatile, performant, general-purpose accelerated computing platforms. Finally, Huang discusses the importance of resilience, diversity, and redundancy in manufacturing, and the company's next big reinvention: AI meeting the physical world through the creation of Omniverse.
OpenAI CEO Sam Altman | AI for the Next Era
OpenAI CEO Sam Altman discusses the potential for artificial intelligence to improve language models, multimodal models, and machine learning, as well as its potential impact on financial markets. He also predicts that the field will remain competitive, with new applications appearing regularly, and that AI will become an important part of life.
DeepMind's Demis Hassabis on the future of AI | The TED Interview
In the TED interview, Demis Hassabis discusses the future of artificial intelligence and how it will lead to greater creativity. He argues that games are an ideal training ground for artificial intelligence, and that chess should be taught in schools as part of a broader curriculum that includes courses on game design.
Future of Artificial Intelligence (2030 - 10,000 A.D.+)
The video predicts that AI technology will continue to grow and evolve, leading to the emergence of superintelligence and robots with human-level consciousness in the next few decades. Virtual beings with self-awareness and emotions will be common, and humanoid robots will become so advanced that they can blend in with humans seamlessly. There will be opposition groups fighting for the rights of conscious virtual beings, while humans merge with AIs to make a century's worth of intellectual progress in just one hour. The most evolved superintelligences will be able to create humanoids that can morph into any person and fly in mid-air, while conscious robot probes composed of self-replicating nanobots will be sent to other galaxies through wormholes. In the future, humans and AI hybrids will transcend into higher dimensions, resembling deities of the past.
Let's build GPT: from scratch, in code, spelled out
We build a Generatively Pretrained Transformer (GPT), following the paper "Attention is All You Need" and OpenAI's GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT (meta :D!). I recommend people watch the earlier makemore videos to get comfortable with the autoregressive language modeling framework and the basics of tensors and PyTorch nn, which we take for granted in this video.
This video introduces the GPT architecture and shows how to build it from scratch in code. The model is trained to predict the next character in a text sequence and is implemented as a PyTorch module. The video covers how to set up the model, how to train it, and how to evaluate the results.
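As a rough sketch of that setup (not the video's exact code), here is a minimal character-level language model implemented as a PyTorch module with a toy training loop; the vocabulary size, batch shape, and random training data are placeholder assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BigramLanguageModel(nn.Module):
    """Minimal character-level language model: each token directly
    looks up the logits for the next token."""
    def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets=None):
        logits = self.token_embedding(idx)  # (batch, time, vocab)
        loss = None
        if targets is not None:
            B, T, C = logits.shape
            loss = F.cross_entropy(logits.view(B * T, C), targets.view(B * T))
        return logits, loss

# Toy training loop on random token ids (real data would be encoded text).
vocab_size = 65
model = BigramLanguageModel(vocab_size)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
for step in range(100):
    xb = torch.randint(vocab_size, (4, 8))  # input characters
    yb = torch.randint(vocab_size, (4, 8))  # next-character targets
    _, loss = model(xb, yb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```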
This video also demonstrates how to build a self-attention module in code. A single attention head uses linear layers to project each token into query, key, and value vectors, and their dot products give affinities between tokens. Causal masking is implemented with a lower-triangular matrix that masks out attention to future positions, and a softmax then normalizes each row, producing data-dependent weights between tokens.
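A minimal sketch of one such causal self-attention head, in the same spirit (the embedding size, head size, and block size below are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionHead(nn.Module):
    """One head of causal self-attention (sketch)."""
    def __init__(self, n_embd, head_size, block_size):
        super().__init__()
        self.key = nn.Linear(n_embd, head_size, bias=False)
        self.query = nn.Linear(n_embd, head_size, bias=False)
        self.value = nn.Linear(n_embd, head_size, bias=False)
        # Lower-triangular mask: each position may only attend to the past.
        self.register_buffer('tril', torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        B, T, C = x.shape
        k = self.key(x)    # (B, T, head_size)
        q = self.query(x)  # (B, T, head_size)
        # Scaled dot-product affinities between tokens.
        wei = q @ k.transpose(-2, -1) * k.shape[-1] ** -0.5  # (B, T, T)
        wei = wei.masked_fill(self.tril[:T, :T] == 0, float('-inf'))
        wei = F.softmax(wei, dim=-1)   # normalize each row
        return wei @ self.value(x)     # (B, T, head_size)

# Example: 4 sequences of 8 tokens with 32-dim embeddings.
head = SelfAttentionHead(n_embd=32, head_size=16, block_size=8)
out = head(torch.randn(4, 8, 32))
print(out.shape)  # torch.Size([4, 8, 16])
```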
MIT 6.801 Machine Vision, Fall 2020. Lecture 1: Introduction to Machine Vision
The lecture "Introduction to Machine Vision" provides a thorough overview of the course logistics and objectives, with emphasis on the physics-based approach to image analysis. It covers machine vision components, ill-posed problems, surface orientation, and the challenges of image processing. The lecturer also introduces the least squares optimization method and the pinhole model used in cameras. The camera-centric coordinate system, optical axis, and the use of vectors are also briefly discussed. The course aims to prepare students for more advanced machine vision courses and real applications of math and physics in programming.
The speaker also discusses various concepts related to image formation, including vector notation for perspective projection, surface illumination, foreshortening of surface elements, and why 3D vision problems can be solved from 2D images. The illumination on a surface varies with the incident angle: because of foreshortening, the brightness of a surface element follows a cosine relationship with the angle between the surface normal and the light direction, which can be used to relate the measured brightness of different parts of a surface to their orientation. However, recovering the orientation of every little facet of an object is difficult, since orientation has two unknowns while a single brightness measurement provides only one constraint. The lecture concludes by noting that the math behind tomography is conceptually simple, but its inversion equations are complicated, making reconstructions challenging to perform.
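For reference, the two relationships described above in standard notation (the symbols f, (X, Y, Z), E, and θ_i are this summary's choices, not necessarily the lecturer's exact notation):

```latex
% Pinhole (perspective) projection: a scene point (X, Y, Z) in the
% camera-centric frame maps to image coordinates (x, y) at focal length f.
\[
  \frac{x}{f} = \frac{X}{Z}, \qquad \frac{y}{f} = \frac{Y}{Z}
\]
% Foreshortening / Lambertian cosine law: the irradiance E received by a
% surface element falls off with the incident angle \theta_i between the
% surface normal and the light direction.
\[
  E \propto \cos\theta_i
\]
```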