Conference: Jensen Huang (NVIDIA) and Ilya Sutskever (OpenAI). AI Today and Vision of the Future
The CEO of NVIDIA, Jensen Huang, and the co-founder of OpenAI, Ilya Sutskever, discuss the origins and advances of artificial intelligence (AI) in a conference conversation. Sutskever explains how the promise of deep learning became clear to him, how unsupervised learning through compression led to the discovery of a neuron that corresponded to sentiment, and how pre-training a neural network paved the way for instruction-following models refined through human and AI collaboration. They also discuss the advances and limitations of GPT-4 and multi-modal learning, as well as the role of synthetic data generation and of improving the reliability of AI systems. Although the core ideas date back some twenty years, both marvel at the progress made in AI research.
It’s Time to Pay Attention to A.I. (ChatGPT and Beyond)
The video discusses the development of artificial intelligence (AI) and how it is changing the way we work and live. Some people are excited about AI's potential, while others worry about its implications. The speaker also provides a brief summary of a recent podcast episode.
The Inside Story of ChatGPT’s Astonishing Potential | Greg Brockman | TED
In this section of the video, Greg Brockman discusses the role of AI in improving education. He argues that traditional education methods are often inefficient and ineffective, with students struggling to retain knowledge and teachers struggling to teach in a way that engages every student. Brockman suggests that AI could help to solve these problems by providing personalized learning experiences for each student. With AI tools, it is possible to monitor student progress in real-time, adjusting the curriculum to their needs and preferences. This could lead to more engaging and efficient learning experiences, allowing students to retain more knowledge and teachers to focus on more important tasks. Brockman also emphasizes the importance of designing AI tools with privacy in mind, ensuring that student data is protected and used only for educational purposes.
MIT Deep Learning in Life Sciences - Spring 2021
The "Deep Learning in Life Sciences" course applies machine learning to various life sciences tasks and is taught by a researcher in machine learning and genomics with a teaching staff of PhD students and undergraduates from MIT. The course covers machine learning foundations, gene regulatory circuitry, variation in disease, protein interactions and folding, and imaging using TensorFlow through Python in a Google Cloud platform. The course will consist of four problem sets, a quiz, and a team project, with mentoring sessions interspersed to aid students in designing their own projects. The instructor emphasizes the importance of building a team with complementary skills and interests and provides various milestones and deliverables throughout the term. The course aims to provide real-world experience, including grant and fellowship proposal writing, peer review, yearly reports, and developing communication and collaboration skills. The speaker discusses the differences between traditional AI and deep learning, which builds an internal representation of a scene based on observable stimuli, and emphasizes the importance of deep learning in the life sciences due to the convergence of training data, compute power, and new algorithms.
This introductory lecture on deep learning in the life sciences explains why machine learning and deep learning matter for exploring the complexity of the natural world. The talk focuses on Bayesian inference and its central role in both classical and deep machine learning, along with the differences between generative and discriminative approaches to learning. The lecture also highlights the power of support vector machines, measures of classification performance, and the linear algebra needed to understand networks across biological systems. The speaker notes that the course will cover topics such as regularization, avoiding overfitting, and constructing training sets, and concludes by deferring questions about the interpretability of artificial neurons and deep networks to future lectures.
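As an illustration of the Bayesian reasoning the lecture leans on, the sketch below applies Bayes' rule to a single 1-D feature with two classes. The class priors, Gaussian likelihood parameters, and class names are made-up values for illustration, not numbers from the lecture.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 1-D feature (e.g. an expression level); all parameters are illustrative.
prior = {"healthy": 0.9, "disease": 0.1}
likelihood = {"healthy": norm(loc=0.0, scale=1.0),   # p(x | class)
              "disease": norm(loc=2.0, scale=1.0)}

def posterior(x):
    """Bayes' rule: p(class | x) is proportional to p(x | class) * p(class)."""
    unnorm = {c: likelihood[c].pdf(x) * prior[c] for c in prior}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

print(posterior(1.5))  # posterior probabilities for an observed feature value
```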
Machine Learning Foundations - Deep Learning in Life Sciences - Lecture 02 (Spring 2021)
This lecture covers the foundations of machine learning, introducing concepts such as the training and test sets, types of models such as discriminative and generative, evaluating loss functions, regularization and overfitting, and neural networks. The lecturer goes on to explain the importance of hyperparameters, evaluating accuracy in life sciences, correlation testing, and probability calculations for model testing. Finally, the basics of deep neural networks and the structure of a neuron are discussed, highlighting the role of non-linearity in learning complex functions.
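The structure of a single neuron mentioned at the end of the summary can be written down in a few lines: a weighted sum of inputs plus a bias, passed through a non-linear activation. This is a minimal sketch with arbitrary example values, not code from the course.

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """A single artificial neuron: weighted sum plus bias, passed through a non-linearity."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # input features (illustrative values)
w = np.array([0.1, 0.4, -0.2])   # learned weights
b = 0.05                         # bias term
print(neuron(x, w, b))           # without the non-linearity, stacked layers collapse to a linear map
```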
The second part of the lecture explains activation functions and the learning process itself: weights are adjusted to match the target output using partial derivatives of the error, which is the foundation of gradient-based learning. Backpropagation is introduced as the method for propagating these derivatives through a neural network in order to update the weights. The lecture then covers methods for optimizing weights across multiple layers, including stochastic gradient descent, and discusses model capacity and the VC dimension. Model capacity is illustrated graphically alongside the bias-variance trade-off, and regularization techniques such as early stopping and weight decay are presented. The importance of finding the right balance of model complexity is emphasized, and students are encouraged to introduce themselves to their classmates.
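To make the gradient-based learning and backpropagation described above concrete, here is a minimal NumPy sketch that trains a tiny two-layer network on a toy regression problem by hand-deriving the chain-rule gradients. The data, layer sizes, and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): y = sin(x) with noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# Two-layer network: 1 -> 8 (tanh) -> 1
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    # Forward pass
    h_pre = X @ W1 + b1            # pre-activation
    h = np.tanh(h_pre)             # hidden non-linearity
    y_hat = h @ W2 + b2            # prediction
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass (chain rule / backpropagation)
    d_yhat = 2 * (y_hat - y) / len(X)           # dL/dy_hat
    dW2 = h.T @ d_yhat;  db2 = d_yhat.sum(0)
    d_h = d_yhat @ W2.T
    d_hpre = d_h * (1 - np.tanh(h_pre) ** 2)    # derivative of tanh
    dW1 = X.T @ d_hpre;  db1 = d_hpre.sum(0)

    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training loss: {loss:.4f}")
```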
Convolutional Neural Networks (CNNs) - Deep Learning in Life Sciences - Lecture 03 (Spring 2021)
This lecture covers convolutional neural networks (CNNs) in deep learning for the life sciences. The speaker discusses the principles of the visual cortex and how they relate to CNNs, starting from the basic building blocks of the human and animal visual systems: neurons that sum weighted inputs and fire once a bias-adjusted activation threshold is crossed. CNNs use specialized neurons for low-level detection operations and layers of hidden units for learning abstract concepts. The lecture also covers the roles of convolution and pooling layers, the use of multiple filters to extract multiple features, and the concept of transfer learning. Finally, non-linearities and the use of padding to handle edge cases in convolution are discussed. Overall, the lecture highlights the power and potential of CNNs across a variety of life-science applications.
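A minimal Keras sketch of the CNN building blocks discussed here (convolution with multiple filters, "same" padding for edge cases, pooling, and non-linear activations) might look as follows; the input shape, filter counts, and class count are placeholders rather than anything specified in the course.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A minimal image-classification CNN; input shape and class count are placeholders.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, kernel_size=3, padding="same", activation="relu"),  # 16 filters; "same" padding handles edges
    layers.MaxPooling2D(pool_size=2),                                     # pooling for spatial down-sampling
    layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),  # deeper layers learn more abstract features
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),                               # e.g. 10 output classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```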
The second part of the lecture covers further concepts related to CNNs. The speaker discusses how input size is maintained through the network, data augmentation as a means of achieving invariance to transformations, and different CNN architectures and their applications. The lecture also covers the challenges of training deep CNNs, the hyperparameters that drive overall performance, and approaches to hyperparameter tuning. The speaker emphasizes the importance of understanding the fundamental principles behind CNNs and highlights their versatility as a technique applicable in many settings.
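The data augmentation and hyperparameter tuning mentioned above can be sketched as follows; the specific transforms, the learning-rate grid, and the helper names (build_model, train_and_evaluate) are assumptions for illustration, not part of the lecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation pipeline applied on-the-fly during training; the transforms are illustrative.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),   # mirror images so the model becomes invariant to left/right
    layers.RandomRotation(0.1),        # small random rotations (fraction of a full turn)
    layers.RandomZoom(0.1),            # slight zoom in/out
])

# A crude hyperparameter sweep: train the same architecture with different learning rates and
# keep the best validation accuracy. build_model and train_and_evaluate are hypothetical helpers.
# best = max(
#     (train_and_evaluate(build_model(), lr, augment) for lr in [1e-2, 1e-3, 1e-4]),
#     key=lambda result: result.val_accuracy,
# )
```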
Recurrent Neural Networks (RNNs), Graph Neural Networks (GNNs), Long Short-Term Memory (LSTMs) - Lecture 04 (Spring 2021)
This video covers a range of topics, starting with recurrent neural networks (RNNs) and their ability to encode temporal context, which is critical for sequence learning. The speaker introduces hidden Markov models and their limitations, which leads to a discussion of long short-term memory (LSTM) modules as a powerful approach for handling long sequences. The video also discusses the transformer module, which learns temporal relationships without unrolling or using RNNs. Graph neural networks are introduced, along with their potential applications to classic network problems and to computational biology. The talk concludes with research frontiers in graph neural networks, such as generative graph models and latent graph inference.
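A minimal Keras LSTM classifier gives a feel for the sequence models discussed here; the vocabulary size, sequence length, and binary label are placeholders chosen for illustration (e.g. a DNA-like alphabet), not an architecture from the lecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small LSTM classifier over integer-encoded sequences; all sizes are placeholders.
model = models.Sequential([
    layers.Input(shape=(200,)),                     # sequence of length 200
    layers.Embedding(input_dim=5, output_dim=16),   # e.g. A/C/G/T plus a padding token
    layers.LSTM(32),                                # gated memory cell carries long-range context
    layers.Dense(1, activation="sigmoid"),          # binary label, e.g. bound / not bound
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```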
The second part of the video discusses recurrent neural networks (RNNs), graph neural networks (GNNs), and long short-term memory (LSTM) modules. It explains how traditional feedforward neural networks struggle with graph-structured data, whereas GNNs can handle a wide range of invariances and propagate information across the graph. The speaker also discusses graph convolutional networks (GCNs), their advantages, and their challenges, and describes how attention functions make GNNs more powerful and flexible.
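The propagation step behind a graph convolutional network can be sketched directly in NumPy using the standard normalized-adjacency rule; the toy graph, feature matrix, and weight matrix below are arbitrary illustrative values.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step (Kipf & Welling-style propagation rule):
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), where A is the adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # normalize by node degree
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # aggregate neighbours, transform, ReLU

# Toy graph: 4 nodes in a ring, 3 input features, 2 output features (values are illustrative).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 2))
print(gcn_layer(A, H, W))
```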
Interpretable Deep Learning - Deep Learning in Life Sciences - Lecture 05 (Spring 2021)
This video discusses the importance of interpretability in deep learning models, especially in the field of life sciences where decisions can have dire consequences. The speaker explains two types of interpretability: building it into the design of the model from the outset and developing post hoc interpretability methods for already-built models. They go on to explore different techniques for interpreting models, including weight visualization, surrogate model building, and activation maximization, and discuss the importance of understanding the internal representations of the model. The lecturer also explains several methods for interpreting individual decisions, such as example-based and attribution methods. Additionally, the speaker discusses the challenge of interpreting complex concepts and the limitations of neural network interpretations, and raises hypotheses related to the discontinuity of gradients in deep neural networks.
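As a concrete example of a gradient-based attribution method of the kind discussed, the sketch below computes a simple saliency map with TensorFlow's GradientTape; the model and inputs are assumed to exist and are not from the lecture.

```python
import tensorflow as tf

def saliency_map(model, x, class_index):
    """Gradient-based attribution: how much does each input element influence
    the score of the chosen class? (model and float inputs x are assumed to exist)."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, class_index]   # score of the class we want to explain
    grads = tape.gradient(score, x)        # d(score) / d(input)
    return tf.abs(grads)                   # magnitude = per-feature importance
```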
In the second part of the lecture, the speaker addresses the challenges that discontinuous gradients and saturated functions pose for interpreting deep learning models in the life sciences. Proposed remedies include averaging the gradients over many small perturbations of the input to obtain a smoother attribution signal, using random noise to highlight salient features in image classification, and backpropagation-based techniques such as deconvolutional networks and guided backpropagation for interpreting gene-regulatory models. The speaker also discusses the quantitative evaluation of attribution methods, including the pixel-flipping procedure and the remove-and-replace score approach, and closes by emphasizing the need for interpretability in deep learning models and the range of techniques available for achieving it.
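The idea of averaging gradients over small input perturbations to obtain a smoother attribution (SmoothGrad-style) can be sketched as follows; the sample count and noise level are illustrative defaults, not values from the lecture.

```python
import tensorflow as tf

def smoothed_saliency(model, x, class_index, n_samples=50, noise_std=0.1):
    """SmoothGrad-style attribution: average gradients over noisy copies of the input
    to get a smoother, less fragile saliency map (parameters are illustrative)."""
    x = tf.convert_to_tensor(x)
    total = tf.zeros_like(x)
    for _ in range(n_samples):
        noisy = x + tf.random.normal(tf.shape(x), stddev=noise_std)
        with tf.GradientTape() as tape:
            tape.watch(noisy)
            score = model(noisy)[:, class_index]
        total += tape.gradient(score, noisy)
    return tf.abs(total / n_samples)
```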
Generative Models, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Representation Learning - Lecture 06 (Spring 2021)
This video discusses the concept of representation learning in machine learning, emphasizing its importance in classification tasks and potential for innovation in developing new architectures. Self-supervised and pretext tasks are introduced as ways to learn representations without requiring labeled data, through techniques such as autoencoders and variational autoencoders (VAEs). The speaker also discusses generative models, such as VAEs and generative adversarial networks (GANs), which can generate new data by manipulating the latent space representation. The pros and cons of each method are discussed, highlighting their effectiveness but also their limitations. Overall, the video provides a comprehensive overview of different approaches to representation learning and generative models in machine learning.
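A plain autoencoder, the simplest of the representation-learning models mentioned above, can be sketched in a few Keras lines; the input dimensionality and bottleneck size are placeholders, and X_train is assumed to exist.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A plain autoencoder: compress the input to a low-dimensional code, then reconstruct it.
inputs = layers.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)        # learned representation (bottleneck)
outputs = layers.Dense(784, activation="sigmoid")(code)   # reconstruction of the input
autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")          # trained without labels: target = input
# autoencoder.fit(X_train, X_train, epochs=10)             # X_train is a hypothetical dataset
```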
The video explores generative adversarial networks (GANs), variational autoencoders (VAEs), and representation learning in generative models. In a GAN, the generator and discriminator have opposing objectives; training can be slow and unstable while the fake samples are still poor, but improvements in resolution and in the objective function can lead to realistic-looking images. The speaker demonstrates how GANs can generate architecturally plausible rooms and translate one room into another. VAEs explicitly model a density function and capture the diversity of real-world images through meaningful latent-space parameters. The speaker encourages creativity and experimentation with open architectures and models, and notes that applying generative models and representation learning across domains is a rapidly growing field with many possibilities.
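Two small pieces of the VAE machinery mentioned above, the reparameterization trick and the KL term of the loss, can be sketched as follows under standard assumptions (a standard-normal prior and a diagonal-Gaussian encoder); this is illustrative, not the lecture's code.

```python
import tensorflow as tf

def reparameterize(mu, log_var):
    """The VAE reparameterization trick: sample z = mu + sigma * eps so that the
    sampling step stays differentiable with respect to the encoder outputs."""
    eps = tf.random.normal(tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL term of the VAE loss against a standard-normal prior (per example)."""
    return -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
```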
Regulatory Genomics - Deep Learning in Life Sciences - Lecture 07 (Spring 2021)
The lecture covers the field of regulatory genomics, including the biological foundations of gene regulation, classical methods for regulatory genomics, motif discovery using convolutional neural networks, and the use of machine learning models to understand how sequence encodes gene-regulation properties. The speaker explains the importance of regulatory motifs in gene regulation and how disruptions to these motifs can lead to disease. They introduce a model built on a convolutional neural network that takes sequencing reads mapped to the genome and counts how many five-prime read ends fall at each base position on the two strands. The model supports multiple readouts for different proteins, which can be fitted separately or simultaneously with a multitask model, and the same framework can be applied to other base-resolution genomic assays. Interpretation frameworks built on top of the model uncover biological stories about how sequence syntax affects transcription factor (TF) cooperativity, and its predictions have been validated with high-resolution CRISPR experiments.
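A highly simplified model in the spirit of the sequence-to-profile approach described here can be sketched with one-hot DNA input, dilated 1-D convolutions, and a two-strand per-base output trained with a count-friendly loss. The encoding helper, filter counts, kernel sizes, and sequence length are placeholders, not the published architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def one_hot_dna(seq):
    """One-hot encode a DNA string into a (length, 4) array over A, C, G, T."""
    lookup = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in lookup:
            out[i, lookup[base]] = 1.0
    return out

# Toy sequence-to-profile model: dilated 1-D convolutions over one-hot DNA,
# predicting a per-base count profile for the two strands.
seq_len = 1000
inputs = layers.Input(shape=(seq_len, 4))
x = layers.Conv1D(64, 25, padding="same", activation="relu")(inputs)   # motif-detector-like filters
for rate in (2, 4, 8):
    x = layers.Conv1D(64, 3, padding="same", dilation_rate=rate, activation="relu")(x)
profile = layers.Conv1D(2, 1, padding="same")(x)   # per-base output for plus and minus strands
model = models.Model(inputs, profile)
model.compile(optimizer="adam", loss="poisson")    # counts are often modelled with a Poisson-style loss
```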
The final part of the video discusses how deep learning can improve the quality of low-coverage ATAC-seq data by enhancing and denoising signal peaks. AtacWorks is a deep learning model that takes coverage data as input and uses a residual neural network architecture to improve signal accuracy and identify accessible chromatin sites. The speaker demonstrates how AtacWorks can handle low-quality data and describes an experiment on hematopoietic stem cells that used ATAC-seq to identify regulatory elements involved in lineage priming. Once the model is trained, it can be applied to small populations of very few cells to predict what the data would have looked like with more cells sequenced, significantly increasing the resolution at which single-cell chromatin accessibility can be studied, and the models transfer across experiments, cell types, and even species. The speaker invites students to reach out for internships or collaborations.
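A toy version of a residual 1-D convolutional denoiser for coverage tracks, in the spirit of the approach described above, might look as follows; the window length, filter counts, block depth, and loss are placeholders rather than the actual AtacWorks design.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def residual_block(x, filters=32, kernel_size=9):
    """A 1-D residual block: two convolutions plus a skip connection,
    so the network learns a correction to the noisy input signal."""
    skip = x
    x = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    x = layers.Conv1D(filters, kernel_size, padding="same")(x)
    return layers.Activation("relu")(layers.Add()([x, skip]))

# Toy denoiser over base-pair coverage tracks; all sizes are illustrative.
window = 4096
inputs = layers.Input(shape=(window, 1))               # noisy low-coverage signal
x = layers.Conv1D(32, 9, padding="same", activation="relu")(inputs)
for _ in range(3):
    x = residual_block(x)
denoised = layers.Conv1D(1, 1, padding="same")(x)      # predicted clean coverage
model = models.Model(inputs, denoised)
model.compile(optimizer="adam", loss="mse")
```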