Keep-Current - March 2021 Edition
Every month, we deliver a community update with the latest articles and technologies, across various topics, that are worth knowing about.
Natural Language Processing (NLP)
Projects and research about Natural Language Processing
Language generation with a knowledge base
Imagine this: upon running into an issue, you ask for help in a dedicated product knowledge base by freely formulating your question in plain language. The engine searches across all the company wiki pages and formulates an answer for you!
Most language generation models rely on a language model that encapsulates all the information it has read. Researchers from Facebook AI, UCL, and NYU, together with HuggingFace, created a new approach, Retrieval-Augmented Generation (RAG for short), that generates text grounded in facts retrieved from Wikipedia - and it even works quite well! A short usage sketch follows the links below.
Paper: https://arxiv.org/abs/2005.11401
Demo: https://huggingface.co/rag/
Documentation: https://huggingface.co/transformers/master/model_doc/rag.html
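To get a feel for the API, here is a minimal usage sketch along the lines of the HuggingFace documentation (use_dummy_dataset=True skips downloading the full Wikipedia index, so the answers will be poor - it's only for wiring things up):

```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
# The retriever fetches supporting passages; the dummy index is for a
# quick local try-out only, not for real question answering
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

inputs = tokenizer("who wrote the origin of species?", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```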
Semantic shift detector
"Semantic change" or "semantic shift" is the evolution process of words, throughout they gain a new meaning in which they are used more frequently than before. For example - "Apple" and "Amazon" have shifted over the years to describe companies; "cough" and "fever" are nowadays symptoms that describe COVID-19 symptoms, and not only a flu.
When practicing Natural Language Processing with deep learning, it is important to pay attention to these shifts, especially when dealing with text that changes over time.
A new tool - TextEssence - aims to assist by comparing entity embeddings and their distances to other words across corpora.
Although the open-source code is still under construction, a demo is available, and it looks promising.
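While waiting for the code, the underlying idea is easy to prototype. Below is a minimal sketch (not TextEssence itself) that scores a word's shift between two time slices by comparing its nearest neighbors in separately trained gensim word2vec spaces; the toy corpora are made up for illustration:

```python
from gensim.models import Word2Vec

def neighbor_overlap_shift(word, model_a, model_b, k=10):
    """Score semantic shift of `word` between two embedding spaces by
    comparing its k nearest neighbors (1.0 = entirely different neighbors)."""
    neighbors_a = {w for w, _ in model_a.wv.most_similar(word, topn=k)}
    neighbors_b = {w for w, _ in model_b.wv.most_similar(word, topn=k)}
    return 1.0 - len(neighbors_a & neighbors_b) / k

# Toy corpora, one per time slice (real usage: tokenized sentences per year)
corpus_2010 = [["i", "ate", "an", "apple", "and", "a", "banana"]] * 100
corpus_2020 = [["apple", "released", "a", "new", "phone", "and", "a", "watch"]] * 100

# Note: in gensim < 4.0 the argument is `size` instead of `vector_size`
model_2010 = Word2Vec(sentences=corpus_2010, vector_size=50, min_count=5, seed=1)
model_2020 = Word2Vec(sentences=corpus_2020, vector_size=50, min_count=5, seed=1)
print(neighbor_overlap_shift("apple", model_2010, model_2020, k=3))
```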
BERT Adapts
Since BERT appeared in 2018 and its models became publicly available (also for PyTorch), heavy research has gone into transfer learning and fine-tuning, to put BERT to use for tasks such as Named Entity Recognition (NER) and Question Answering (QA). Several methods have been suggested for fine-tuning. One of them, an adapter-based approach, is robust enough to be composed and stacked, giving BERT new capabilities on which it was never directly trained.
Taking it a step further, a dedicated model hub for these adapters - AdapterHub - was opened on top of the HuggingFace Transformers library. Transfer learning was never this accessible!
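As a rough sketch of how this looks in code, based on the adapter-transformers documentation (the adapter identifier is illustrative - browse adapterhub.ml for the actual catalogue):

```python
# Requires the AdapterHub fork, which patches the transformers package:
#   pip install adapter-transformers
from transformers import BertTokenizer, BertModelWithHeads

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModelWithHeads.from_pretrained("bert-base-uncased")

# Download a pre-trained task adapter (plus its prediction head) from the hub
adapter_name = model.load_adapter("sentiment/sst-2@ukp")
model.set_active_adapters(adapter_name)

inputs = tokenizer("A charming and often affecting journey.", return_tensors="pt")
print(model(**inputs).logits)  # sentiment logits from the adapter's head
```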
Computer Vision
Layout detection
Layout Parser is an open-source library and model zoo with many models that can be combined with OCR tools to parse scanned documents: it divides each page into sections, so that each section can be processed by the OCR tool and its content extracted separately.
It works pretty well on academic papers and on scanned documents with tables and images.
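A rough sketch of the intended workflow, based on the Layout Parser docs (the input image path is hypothetical):

```python
import cv2
import layoutparser as lp

image = cv2.imread("scanned_page.png")  # hypothetical input page

# Pre-trained detection model from the Layout Parser model zoo
model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)
layout = model.detect(image)

# Crop each detected text block and run OCR on it separately
ocr_agent = lp.TesseractAgent(languages="eng")
for block in layout:
    if block.type == "Text":
        segment = block.crop_image(image)
        print(ocr_agent.detect(segment))
```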
Graph Neural Networks (GNN)
Research articles and tools about the emerging domain of Graph Neural Networks
Reason and compare
Interpretability is a crucial factor in machine learning. However, neural networks operating on graphs (GNNs) have so far lacked a proper solution for producing prediction explanations.
DIG - "Dive Into Graphs" - is a new framework that complements other libraries, such as PyTorch-Geometric, by enabling, among other things, comparison of graph models against baseline models, generation of prediction explanations, deep learning on 3D object graphs (meshes), and more.
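To make "prediction explanations" concrete, here is what the idea looks like with the GNNExplainer built into PyTorch-Geometric (which DIG complements) - a sketch, not DIG's own API; the model is left untrained for brevity:

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv, GNNExplainer

dataset = Planetoid(root="/tmp/Cora", name="Cora")
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return F.log_softmax(self.conv2(x, edge_index), dim=-1)

model = GCN()  # train it first in real usage

# Learn soft masks over node features and edges that best explain
# the model's prediction for node 10
explainer = GNNExplainer(model, epochs=200)
node_feat_mask, edge_mask = explainer.explain_node(10, data.x, data.edge_index)
```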
Self Supervised Graphs
Self-supervised learning (SSL) is a successful technique that boosted NLP (word2vec, BERT, GPT-3) as well as computer vision. However, SSL for Graph Neural Networks (GNNs) still lags behind.
A new method - SelfGNN - was recently added to the existing SSL methods for GNNs (which you can read more about in these reviews).
Paper: https://arxiv.org/abs/2103.14958
In short, it uses implicit negative sampling (randomly replacing the expected predicted value with a false one), mimics siamese networks, and applies data augmentation techniques.
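The siamese objective is simple enough to sketch. The snippet below is a toy illustration of the general negative-sample-free recipe (SimSiam-style), not the SelfGNN code itself; the view embeddings would come from a GNN encoder plus a small predictor head:

```python
import torch
import torch.nn.functional as F

def siamese_ssl_loss(p1, z1, p2, z2):
    """Symmetric negative cosine similarity between each view's predictor
    output and the stop-gradient embedding of the other view."""
    loss1 = -F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
    loss2 = -F.cosine_similarity(p2, z1.detach(), dim=-1).mean()
    return (loss1 + loss2) / 2

# Hypothetical batch: embeddings (z) and predictions (p) of two augmented
# views of the same graphs, e.g. from a GNN encoder + MLP predictor
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
p1, p2 = torch.randn(32, 128), torch.randn(32, 128)
print(siamese_ssl_loss(p1, z1, p2, z2))
```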
Hopefully, more methods that enable transfer learning for graphs will see the light of day soon.
Interpretability and Reasoning
Frameworks and software that help clarify models' predictions
Language Interpretability Tool (LIT)
LIT is a project from Google's People and AI Research (PAIR) group. Similar to the previously released What-If Tool for reasoning about general machine learning models, LIT is a framework for exploring Natural Language Processing models. It supports both PyTorch and TensorFlow and includes 3D embedding visualization. It helps explore the data points on which the model performs poorly, and visualize which word(s) contributed to a prediction.
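Getting a model into LIT roughly follows this pattern (a sketch after the LIT docs; MySentimentModel and MySstDataset are placeholder subclasses of LIT's Model and Dataset API classes):

```python
from lit_nlp import dev_server
from lit_nlp import server_flags

# Placeholder wrappers: subclasses of lit_nlp.api.model.Model and
# lit_nlp.api.dataset.Dataset that expose your model and data to LIT
models = {"sst_tiny": MySentimentModel()}
datasets = {"sst_dev": MySstDataset()}

lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
lit_demo.serve()  # launches the interactive UI in the browser
```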
MLOps
Projects and articles about Machine Learning Operations (MLOps)
Easy Way to Production
BentoML.ai is a Machine Learning Operations (MLOps) framework that makes it easy to deploy machine learning models to production.
The models are wrapped with an API and optimized with micro-batching for fast inference. The API includes endpoints for health checks and metrics reporting, and can be exported in Swagger (OpenAPI) format.
Many well-known frameworks and deployment targets are supported: Docker, Kubernetes (K8s), AWS, PyTorch, TensorFlow, spaCy, and fast.ai, to name a few.
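A minimal service definition, in the style of BentoML's (0.x) documentation - here wrapping a hypothetical scikit-learn classifier:

```python
from bentoml import BentoService, api, artifacts, env
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact

@env(infer_pip_packages=True)
@artifacts([SklearnModelArtifact("model")])
class IrisClassifier(BentoService):
    # batch=True turns on the micro-batching mentioned above
    @api(input=DataframeInput(), batch=True)
    def predict(self, df):
        return self.artifacts.model.predict(df)

# Packing a trained model and saving produces a deployable bundle:
#   svc = IrisClassifier(); svc.pack("model", trained_clf); svc.save()
```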
Lightning DeepSpeed
Training large models with billions of parameters may take days, weeks, or even months, depending on the number of available GPUs. A new guide explains how to combine PyTorch Lightning and DeepSpeed, and demonstrates how to scale models with just a few lines of code.
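As the guide shows, enabling it is essentially a Trainer flag (a sketch assuming PyTorch Lightning 1.2+ with deepspeed installed; MyLightningModule is a placeholder):

```python
from pytorch_lightning import Trainer

model = MyLightningModule()  # placeholder LightningModule

# DeepSpeed (ZeRO) sharding plus 16-bit precision, across 4 GPUs
trainer = Trainer(gpus=4, precision=16, plugins="deepspeed")
trainer.fit(model)
```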
From the web
Interesting projects and articles from around the web
AI for Mental-Health Awareness
An interesting project by Over the Bridge, a non-profit organization, used AI to raise awareness of mental-health struggles in the music industry.
They fed a TensorFlow machine learning algorithm with the music and lyrics of musicians from the "27 Club" - musicians who died at the age of 27, such as Jim Morrison (The Doors) and Kurt Cobain (Nirvana). Then, using techniques such as transfer learning and GANs, they trained a generative model to create new lyrics and music in the style of these artists.
The model output was collected and edited by (human) sound engineers, who released four tracks, available for listening here.
That's it for this time, see you next month!