Machine Learning for Pathology - Lecture 19 - MIT Deep Learning in the Life Sciences (Spring 2021)
The lecture covers various aspects of the application of deep learning in computational pathology, including the challenges and limitations of the technology. The speaker discusses the need for caution in trusting algorithms blindly and emphasizes the importance of understanding what a network is learning. The lecture explores several examples of how deep learning is being used in cancer diagnosis, prognosis, and treatment response assessment to develop prognostic and predictive tools for precision medicine. The speaker also discusses the challenges of developing multi-drug treatments for tuberculosis and proposes various lab projects to tackle the issue. Overall, the lecture underscores the potential of deep learning in pathology, while also acknowledging its limitations and the need for a multi-disciplinary approach to ensure its effective deployment in clinical settings.
In the second part of the lecture, the speaker discusses her team's attempts to address batch-to-batch and cell-to-cell heterogeneity in machine learning for pathology using typical variation normalization (TVN) and a k-nearest-neighbor approach. She also describes using morphological profiling to classify drugs based on their effects on bacteria, and developing a data-driven approach to designing and prioritizing drug combinations using both supervised and unsupervised learning. Additionally, the speaker thanks her lab members for their contributions to drug synergy versus antagonism studies, highlighting the importance of considering the larger context for understanding and advancing research in the field.
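The TVN step mentioned above can be illustrated with a small sketch. This is only a rough approximation of the idea (the function names and data below are invented): a whitening transform is fit on control samples alone, so that directions of "typical" nuisance variation are scaled down before treated profiles are compared.

```python
import numpy as np

def tvn_fit(controls, eps=1e-6):
    """Estimate a whitening transform from control-well embeddings only.

    Typical variation normalization (TVN) downweights directions of
    'typical' (nuisance) variation by whitening with PCA fit on controls.
    """
    mu = controls.mean(axis=0)
    centered = controls - mu
    # PCA via SVD of the centered control matrix
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    scale = s / np.sqrt(len(controls) - 1)  # per-component std. dev.
    return mu, vt, scale + eps

def tvn_apply(x, mu, vt, scale):
    """Project onto control PCs and rescale so controls have unit variance."""
    return (x - mu) @ vt.T / scale

rng = np.random.default_rng(0)
controls = rng.normal(size=(200, 8)) * np.linspace(1, 4, 8)  # anisotropic toy data
mu, vt, scale = tvn_fit(controls)
whitened = tvn_apply(controls, mu, vt, scale)
print(np.round(whitened.std(axis=0), 1))  # ~1.0 in every direction
```

Treated-sample embeddings would then be passed through the same `tvn_apply` transform, so that only variation beyond the control distribution stands out.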
Deep Learning for Cell Imaging Segmentation - Lecture 20 - MIT ML in Life Sciences (Spring 2021)
In this video, the speakers discuss the use of deep learning for cell tracking, which involves determining the movement of cells in time-lapse imaging. They explain that traditional manual tracking methods are costly and time-consuming, and that deep learning methods can significantly speed up the process while also providing higher accuracy. The speakers discuss various deep learning architectures for cell tracking, including U-Net, StarDist, and DeepCell. They also note that one of the challenges in cell tracking is distinguishing between cells that are close together or overlap, and that methods such as multi-object tracking or graph-based approaches can help address this issue. The speakers emphasize the importance of benchmarking different deep learning methods for cell tracking and providing open access datasets for reproducibility and comparison. They also highlight the potential applications of cell tracking in various fields, such as cancer research and drug discovery.
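The linking step that follows segmentation can be sketched as an assignment problem between cells detected in consecutive frames. This is a generic illustration (the `link_cells` helper and its distance threshold are invented, not taken from U-Net, StarDist, or DeepCell):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_cells(prev_centroids, curr_centroids, max_dist=15.0):
    """Link segmented cells across consecutive frames.

    Builds a pairwise distance matrix between centroids and solves the
    assignment problem; links farther than `max_dist` are discarded
    (treated as cell appearance/disappearance).
    """
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

prev_c = np.array([[10.0, 10.0], [40.0, 40.0], [80.0, 15.0]])
curr_c = np.array([[12.0, 11.0], [79.0, 18.0], [41.0, 38.0]])
print(link_cells(prev_c, curr_c))  # [(0, 0), (1, 2), (2, 1)]
```

Graph-based trackers generalize this idea by scoring divisions and merges as well, which is how the overlapping-cell ambiguity mentioned above is handled.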
Deep Learning Image Registration and Analysis - Lecture 21 - MIT ML in Life Sciences (Spring 2021)
In this lecture, Adrian Dalca delves into the problem of aligning medical images and the optimization problem behind it. He proposes VoxelMorph, a method that trains neural networks for image registration on unlabeled datasets. He also discusses the challenge of robustness to new data and sequences that neural networks have not seen before, and proposes simulating diverse and extreme conditions to train robust models. He compares classical registration methods to the VoxelMorph and SynthMorph models, with the latter proving remarkably robust. Lastly, he discusses the development of a function that generates templates based on desired properties rather than learning a template directly, and the potential use of capsule video endoscopy for detecting colon abnormalities.
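The unsupervised objective behind VoxelMorph-style registration, image similarity after warping plus a smoothness penalty on the displacement field, can be sketched without any network by simply evaluating the loss for a candidate field. This is a toy NumPy version with nearest-neighbor warping; the real method predicts the field with a CNN and uses differentiable interpolation:

```python
import numpy as np

def warp(image, disp):
    """Warp a 2-D image by a displacement field using nearest-neighbor lookup."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + disp[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + disp[1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def registration_loss(fixed, moving, disp, lam=0.01):
    """Unsupervised registration loss: MSE similarity + smoothness penalty."""
    similarity = np.mean((warp(moving, disp) - fixed) ** 2)
    # penalize spatial gradients of the displacement field (encourage smoothness)
    smooth = sum(np.mean(np.diff(d, axis=a) ** 2) for d in disp for a in (0, 1))
    return similarity + lam * smooth

fixed = np.zeros((16, 16)); fixed[4:8, 4:8] = 1.0
moving = np.zeros((16, 16)); moving[6:10, 6:10] = 1.0  # same square, shifted by (2, 2)
identity = np.zeros((2, 16, 16))
shift = np.full((2, 16, 16), 2.0)                      # field that undoes the shift
print(registration_loss(fixed, moving, identity) > registration_loss(fixed, moving, shift))  # True
```

Training amounts to minimizing this loss over many image pairs with respect to the network that outputs `disp`, which is what lets the method skip per-pair iterative optimization at test time.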
The speaker in this lecture discusses various machine learning approaches to overcome the lack of medical data, specifically in the context of colonoscopy videos for polyp detection. They introduce a deep learning image registration and analysis architecture that utilizes pre-trained weights and random initialization to address domain shift and improve performance. The lecture also covers weakly supervised learning, self-supervised learning, and weakly supervised video segmentation. The speaker acknowledges the challenges faced in using machine learning approaches in medical data analysis and encourages testing these approaches in real medical procedures to reduce workload.
Electronic health records - Lecture 22 - Deep Learning in Life Sciences (Spring 2021)
The emergence of machine learning in healthcare is driven by the adoption of electronic medical records in hospitals and the vast amount of patient data that can be mined for meaningful healthcare insights. The lecture discusses disease progression modeling using longitudinal data from disease registries, which poses challenges due to high dimensionality, missingness, and left and right censoring. It explores the use of non-linear models such as deep Markov models to handle these challenges and effectively model the non-linear density of longitudinal biomarkers. Additionally, the speaker discusses using domain knowledge to develop new neural architectures for the transition function, and the importance of incorporating domain knowledge into model design for better generalization. There is also experimentation with model complexity in the treatment effect functions, and the speaker plans to revisit this question on a larger cohort.
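The generative story of a deep Markov model can be sketched as a state-space simulation. In the actual model the transition f and emission g are neural networks trained with variational inference; the closed-form functions below are only illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def transition(z):
    """Non-linear latent transition (stand-in for a learned neural f)."""
    return np.tanh(1.5 * z) + 0.1

def emission(z):
    """Map the latent disease state to an observed biomarker (stand-in for g)."""
    return 2.0 * z

def simulate(T, z0=0.0, trans_noise=0.05, obs_noise=0.1):
    """Roll a deep-Markov-style model forward:
    z_t ~ N(f(z_{t-1}), s^2),  x_t ~ N(g(z_t), r^2)."""
    zs, xs = [], []
    z = z0
    for _ in range(T):
        z = transition(z) + trans_noise * rng.normal()
        zs.append(z)
        xs.append(emission(z) + obs_noise * rng.normal())
    return np.array(zs), np.array(xs)

zs, xs = simulate(T=50)
print(xs.shape)  # (50,)
```

Missingness and censoring enter through the likelihood: time steps with no observed biomarker simply contribute no emission term, which is one reason this model family suits registry data.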
Deep Learning and Neuroscience - Lecture 23 - Deep Learning in Life Sciences (Spring 2021)
The lecture discusses the interplay between deep learning and neuroscience, specifically in the area of visual science. The goal is to reverse engineer human visual intelligence, which refers to the behavioral capabilities that humans exhibit in response to photons striking their eyes. The speaker emphasizes explaining these capabilities in the language of mechanisms, such as networks of simulated neurons, to enable predictive built systems that can benefit both brain sciences and artificial intelligence. The lecture explores how deep learning models are hypotheses for how the brain executes sensory system processes and the potential applications beyond just mimicking the brain's evolution. Furthermore, the lecture shows practical examples of how neural networks can manipulate memories and change the meaning of something.
This video discusses the potential of deep learning in understanding the cognitive functions of the brain and leveraging this understanding for engineering purposes. The speaker highlights the relevance of recurrent neural networks with their memory and internal dynamics capabilities in this area. The lecture explores the ability of neural systems to learn through imitation and how this can be used to learn representations, computations, and manipulations of working memory. The video also covers the difficulty in finding evidence of feedback learning as a learning condition and the potential of error-correcting mechanisms to tune the system. The lecture concludes by reflecting on the diversity of topics covered in the course and how deep learning can aid in interpreting cognitive systems in the future.
MIT 6.S192 - Lecture 1: Computational Aesthetics, Design, Art | Learning by Generating
This lecture covers a variety of topics related to computational aesthetics, design, and art. The role of AI in democratizing access to art creation, automating design, and pushing the boundaries of art is discussed, as well as the challenges of quantifying aesthetics and achieving visual balance in design using high-level and low-level representations. The lecturer also highlights the potential of computational design to uncover patterns and convey messages effectively, with examples involving color semantics and magazine cover design. Crowdsourcing experiments are used to determine color associations with various topics, and potential applications of this method in other areas are explored. Overall, the lecture introduces the role of AI in creative applications and its potential to revolutionize the way we create art, design, and other forms of creative expression.
The second part discusses the use of generative models, such as StyleGAN and DALL-E, to produce creative works. The lecturer emphasizes the value of learning by generating and encourages viewers to break problems down and use data to come up with innovative, creative solutions. The speaker also addresses the limitations of generative models, such as biased data and a limited ability to generalize and think outside the box. The lecturer assigns students to review the provided code and experiment with the various techniques for generating aesthetically pleasing images, and encourages participation in a Socratic debate between Berkeley and MIT on computational aesthetics and design.
MIT 6.S192 - Lecture 2: A Socratic debate, Alyosha Efros and Phillip Isola
In this video, Alyosha Efros and Phillip Isola discuss the idea of using images to create shared experiences. They argue that this can help to bring back memories and create a sense of nostalgia.
The video is a debate between the two MIT professors about the role of data in artificial intelligence. Efros argues that data is essential to AI, while Isola counters that data can be a hindrance to AI development. The discussion also touches on how to visualize the concept of what it means for something to be memorable.
MIT 6.S192 - Lecture 3: "Efficient GANs" by Jun-Yan Zhu
The lecture covers the challenges of training GAN models, including the need for heavy computation, large amounts of data, and complicated algorithms that require extensive training. The lecturer then introduces methods that let GANs learn faster and train on smaller datasets, such as compressing teacher models with a general-purpose GAN compression framework and using differentiable data augmentation. The lecture also demonstrates interactive image editing with GANs and emphasizes the importance of large and diverse datasets for successful GAN training. The code for running the models is available on GitHub with step-by-step instructions for different types of data. The lecture concludes by discussing the importance of model compression for practical deployment.
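The differentiable-augmentation idea can be sketched as follows: the same random augmentation policy is applied to both real and generated batches inside the discriminator pass, so the discriminator cannot simply memorize a small real set. A NumPy toy version (using only translation, and omitting the autograd machinery a real implementation needs to keep the ops differentiable):

```python
import numpy as np

rng = np.random.default_rng(0)

def diff_augment(batch, max_shift=2):
    """Apply a random per-image translation to a batch of shape (N, H, W).

    The key point is that the SAME augmentation policy is applied to real
    and fake batches before the discriminator sees them; a real
    implementation keeps every op differentiable so gradients flow to G.
    """
    out = np.empty_like(batch)
    for i, img in enumerate(batch):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out[i] = np.roll(img, (dy, dx), axis=(0, 1))
    return out

reals = rng.normal(size=(4, 8, 8))
fakes = rng.normal(size=(4, 8, 8))
# the discriminator would be called on the augmented batches, never the raw ones
aug_reals, aug_fakes = diff_augment(reals), diff_augment(fakes)
print(aug_reals.shape == reals.shape)  # True
```

Because augmentation happens inside the training loop rather than on the dataset, the generator still learns to produce un-augmented images.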
MIT 6.S192 - Lecture 5: "Painting with the Neurons of a GAN" by David Bau
David Bau discusses the evolution of machine learning and the potential for creating self-programming systems. He introduces generative adversarial networks (GANs) and explains how they can be trained to generate realistic images. Bau discusses his process for identifying correlations between specific neurons in a Progressive GAN and certain semantic features in generated images. He demonstrates how he can add various elements to an image, such as doors, grass, and trees, with the help of a GAN. Additionally, he discusses the challenge of adding new elements to a GAN and the ethical concerns surrounding the realistic renderings of the world.
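The "painting" operation Bau demonstrates can be sketched as an intervention on an intermediate feature tensor: activations of a semantically correlated unit are forced on (or off) inside a user-drawn mask, and the rest of the generator renders the result. The toy tensor shape and unit index below are illustrative, not taken from a specific model:

```python
import numpy as np

def paint_with_unit(features, unit, mask, value):
    """Intervene on a generator's intermediate features.

    Setting the activation of a semantically correlated unit inside a user
    mask is the core edit: downstream layers then render the corresponding
    object (door, grass, tree, ...) at that location.
    """
    edited = features.copy()
    edited[unit][mask] = value  # force the unit on (or off) inside the region
    return edited

feats = np.zeros((4, 8, 8))      # toy (units, H, W) feature block
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True            # where the user "paints"
out = paint_with_unit(feats, unit=1, mask=mask, value=5.0)
print(out[1].max(), feats[1].max())  # 5.0 0.0
```

Zeroing a unit instead of boosting it is the complementary edit, used to test whether that unit is causally responsible for an object appearing.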
MIT 6.S192 - Lecture 7: "The Shape of Art History in the Eyes of the Machine" by Ahmed Elgammal
Ahmed Elgammal, a professor of Computer Science and founder of the Art and Artificial Intelligence Lab, discusses the use of AI in understanding and generating human-level creative products. He covers the scientific approach to art history and the importance of advancing AI to understand art as humans do. He also describes using machine learning to classify art styles, analyzing the networks' internal representations, identifying differences between art styles, and quantifying creativity in art through AI. Elgammal proposes the concept of primary objects in art history and explores the potential for AI to generate art, recognizing the limitations of current AI approaches in creative pursuits while describing ongoing experiments to push the networks' boundaries toward abstract and interesting art.
Elgammal also discusses the results of a Turing-test-style study to determine whether humans can distinguish art created by a GAN from art made by humans, using human artworks as a baseline. Participants thought art made by the GAN was produced by humans 75% of the time, underscoring the concept of style ambiguity and its importance in connecting computer vision and machine learning with art history and artistic interests.
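The style-ambiguity idea can be written as a small loss term, assuming (as in the creative-adversarial-network formulation associated with this work) that the generator is rewarded when a style classifier's output is close to uniform over the known styles:

```python
import numpy as np

def style_ambiguity_loss(style_probs, eps=1e-12):
    """Cross-entropy between the predicted style distribution and uniform.

    The generator is rewarded when the style classifier cannot pin its
    output to any single art style, i.e. when the predicted distribution
    is near-uniform; minimizing this loss maximizes style ambiguity.
    """
    k = len(style_probs)
    return -np.sum(np.full(k, 1.0 / k) * np.log(style_probs + eps))

confident = np.array([0.97, 0.01, 0.01, 0.01])  # clearly one style
ambiguous = np.array([0.25, 0.25, 0.25, 0.25])  # maximally ambiguous
print(style_ambiguity_loss(ambiguous) < style_ambiguity_loss(confident))  # True
```

Combined with the usual realism objective, this pushes generated works to look like art while resisting assignment to any established style.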