Data Science Digest — 28.05.21
The new issue of Data Science Digest is here! Dive in to learn about the latest news, articles, tutorials, research papers, and event materials on Data Science, AI, ML, and Big Data. All sections are prioritized for your convenience. Enjoy!
What was new last week?
Is it realistic for AI to make human programmers redundant? Can we make models more efficient, or should we use more efficient platforms to manage models? Also, DeepMind’s fight for autonomy and AI robots creating art — all in this week’s News section.
IBM’s AI research division has released a 14-million-sample dataset called Project CodeNet to develop machine learning models that can help in programming tasks. And yes, nobody’s going to let AI do the programming stuff on its own (so, programmers won’t disappear); rather, AI will help developers become more productive by automating a considerable portion of routine tasks.
With that being said, it’s important to bear in mind that AI and models are code, too. And as any piece of code, they need to be maintained, which requires computing resources. To address this challenge, the team at Credit Sesame came up with a so-called unified model approach. In it, a single model, rather than a set of related but separate models, is created to power a process or product, making entire AI systems more efficient through scaling.
New approaches to scaling models are exciting, and few things can top them except a new end-to-end machine learning platform, especially one released on GCP. Google’s Vertex AI is a managed ML platform that helps accelerate the deployment and maintenance of AI models and implement MLOps throughout the entire development lifecycle.
And speaking of Google, it seems that DeepMind has finally lost its fight to get more autonomy inside the corporation. Despite the sensitive research that DeepMind has been doing all these years (and concerns about Google’s access to this data), Google has moved to strengthen control over the AI division, not to relax it as many had hoped.
On the positive side, maybe Google won’t use DeepMind’s AI technology to, you know, spy on us all but create art instead — just like Ai-Da, the AI robot that has debuted her work in London’s Design Museum. So, let’s stay positive!
The advance of AI means a lot for sports analytics. Given the amount of information we can collect, we can expect major breakthroughs in sports-related decision making. One of the first experiments in this area was performed by DeepMind and Liverpool Football Club, aiming to revolutionize the game for players, decision-makers, fans, and broadcasters.
PyCaret 101 — For Beginners
PyCaret is an open-source ML library and end-to-end model management tool for automating ML workflows. It replaces voluminous code with just a few lines, making it easy to quickly and efficiently build and deploy end-to-end machine learning pipelines. In this article, you’ll learn how to get started with PyCaret.
Deforestation is a global issue that needs to be addressed on a larger scale. Fortunately, AI can help us detect areas suffering from deforestation much faster. In this article, you’ll learn about a full-stack deep learning project that uses high-resolution satellite imagery of the Amazon rainforest to build and train a model that detects tree loss from space with 95% accuracy.
NVIDIA has been working hard to improve their deep learning stack. In this article, you’ll learn how to give it a try in a controlled environment. Specifically, you’ll set up a Docker container to run an NVIDIA DeepStream pipeline on a GPU or a Jetson. The article can serve as a point of reference for beginners looking to build pipelines with NVIDIA DeepStream.
In this step-by-step, beginner-friendly tutorial, you’ll learn how to use PyCaret’s Unsupervised Anomaly Detection Module to detect anomalies in time series data. We’ll start with the basics of anomaly detection, then move on to training and evaluating an anomaly detection model with PyCaret, labeling anomalies, and analyzing the results.
The Tree-based Pipeline Optimization Tool (TPOT) is a Python automated machine learning (AutoML) tool for optimizing ML pipelines through the use of genetic programming. In this article, you’ll learn how you can use TPOT to build a fully-automated prediction pipeline, from handling features and model selection to parameter optimization.
Vaex is a Python library that enables fast processing of large datasets and can serve as a productive alternative to Pandas. In this article, you’ll explore the ins and outs of this library, including the basics, functions, virtual columns, and more. So, if you’re working with large datasets, consider giving Vaex a try.
Fake news is false or misleading content presented as news in different formats. It is considered a huge problem because it erodes trust in all content, even from respectable sources. Thankfully, AI/ML can help identify fake news, and in this article we’ll look at how to apply text analytics and classical machine learning to that end.
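The classical recipe the article refers to can be sketched with scikit-learn; note that the headlines and labels below are invented toy data, and the article’s actual features and model may differ:

```python
# Toy sketch of the classical text-classification recipe (TF-IDF features
# plus logistic regression); the headlines and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Scientists publish peer-reviewed study on vaccine efficacy",
    "Central bank announces interest rate decision after meeting",
    "Miracle cure doctors don't want you to know about",
    "Shocking secret the government is hiding from everyone",
]
labels = [0, 0, 1, 1]  # 0 = credible, 1 = fake

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Miracle secret the government is hiding"]))
```

A real system would train on tens of thousands of labeled articles and add features beyond raw text (source, metadata), but the pipeline shape stays the same.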
In this paper, Yi Tay et al. explore the complex topic of Transformers, the de facto choice of model architecture, and how they compare to CNN-based pre-trained models. According to the team’s findings, CNN models can outperform Transformers. The team believes that both CNNs and Transformers should be considered independently.
In this paper, Dong Chen et al. formulate the mixed-traffic highway on-ramp merging problem as a multi-agent reinforcement learning (MARL) problem. They propose an efficient MARL framework that can be used in dynamic traffic. A novel safety supervisor is developed to significantly reduce collision rate and greatly expedite the training process.
In this research paper, Robin Strudel et al. reveal a new transformer model for semantic segmentation called Segmenter. In contrast to convolution-based approaches, Segmenter enables modeling of global context at the first layer and throughout the network. Segmenter is built on Vision Transformer, whose capabilities were extended to semantic segmentation.
In this paper, you’ll find a collection of 8 Korean natural language understanding (NLU) tasks, including Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking (with pre-trained models for each task).
In this paper, Hanxiao Liu et al. propose a simple attention-free network architecture, gMLP, based solely on MLPs with gating, and argue that it can perform as well as Transformers in key language and vision applications. In a series of experiments, they show that gMLP scales with data and compute efficiently enough to rival Transformers.
In this paper, an international team of researchers introduce APPS, a benchmark for code generation, to assess code generation performance. It measures the ability of models to take an arbitrary natural language spec and generate Python code fulfilling this specification. The benchmark allows them to evaluate models by checking their generated code on test cases.
GTC On Demand
This is the ultimate collection of events, sessions, presentations, and other materials that NVIDIA releases every week. Get access to training, insights, innovative approaches, and intriguing discussions from NVIDIA quickly and easily.