Brief analysis of current status and realistic potential for adoption
Natural Language Processing (NLP) techniques are often defined as a subset of Artificial Intelligence technologies, and they are gaining wide adoption and acceptance among companies and individuals. Not without reason…
The “processing” piece means that we can not only read text-based information, but also understand its context or even the intent behind that unstructured data. This is a powerful cognitive ability that supports plenty of processes and increases human capabilities around the world. For example, the Port of Montreal used NLP and AI models to detect and prioritize critically important cargo during the most difficult months of the pandemic in 2020. NLP-enabled solutions took care of tedious or repetitive manual “reading” operations to extract insights and support human decision-making, and this is just one example.
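To make the idea of automating tedious “reading” work concrete, here is a minimal sketch of keyword extraction over a batch of documents using simple term frequency. This is an illustration only, using the Python standard library; the document texts and stopword list are invented for the example, and real systems (including the cloud services mentioned below) use far more sophisticated models.

```python
import re
from collections import Counter

# A tiny, illustrative stopword list; real pipelines use curated lists or models.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on", "that", "with", "during"}

def key_terms(document: str, top_n: int = 5) -> list[str]:
    """Return the most frequent non-stopword terms in a document."""
    words = re.findall(r"[a-z]+", document.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

# Hypothetical documents, standing in for thousands of real ones.
docs = [
    "The cargo manifest lists medical supplies and medical equipment for the port.",
    "Port operations prioritize critical cargo during the pandemic.",
]
for doc in docs:
    print(key_terms(doc, top_n=3))
```

Even this naive approach surfaces candidate topics per document in milliseconds; production NLP adds linguistic models on top of the same basic idea of turning raw text into ranked, structured signals.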
The interesting part of NLP is its maturity as a technology (or set of technologies). There are different levels of performance depending on the implemented model, but also a realistic understanding of the trade-off between results and requirements. This is key for mass adoption. While the biggest companies focus on state-of-the-art models such as GPT-3 (or equivalents), other organizations are either implementing older techniques or leveraging cloud-native capabilities via Azure, AWS, GCP, etc. Initial requirements and implementation times vary, but the end goal is the same.
Public acceptance is also very important, especially now that international AI and data privacy regulations are maturing and becoming more restrictive. Other AI technologies such as image detection and face recognition are being analyzed, evaluated, and even abandoned due to justified concerns about bias and questionable performance. NLP is not exempt from ethical and social concerns, but these mostly relate to the replication of negative patterns from the language itself, rather than to model performance. There are also ongoing environmental concerns and discussions, but they focus on the latest generation of powerful GPT-n models rather than on the whole set of technologies.
In summary: if we have the opportunity to use scalable technologies to augment human capabilities (e.g., extracting key information from thousands of documents in seconds), if the levels of complexity can match each adopter’s internal capabilities, and if the potential ethical impact is limited and continuously monitored, then we can affirm that the future looks bright for NLP and its pragmatic combination with other technologies.
Adrian Gonzalez Sanchez