Improving on previous work
Metonymi builds on shallow neural network models like word2vec, which produce mathematical representations of individual words that support meaningful comparisons. Metonymi extends this idea to entire text documents, allowing word sense to be captured in the representations.
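To illustrate the word2vec-style comparisons mentioned above, here is a minimal sketch using toy, hand-written vectors; real word2vec embeddings are learned from a corpus and typically have hundreds of dimensions, and these values are purely illustrative.

```python
import numpy as np

# Toy word vectors (hypothetical values for illustration only; real
# word2vec embeddings are learned from a large text corpus).
vectors = {
    "king":  np.array([0.9, 0.80, 0.1]),
    "queen": np.array([0.9, 0.75, 0.8]),
    "apple": np.array([0.1, 0.20, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: values near 1.0 mean the vectors point the
    # same way, i.e. the words are used in similar contexts.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["king"], vectors["queen"]))  # relatively high
print(cosine(vectors["king"], vectors["apple"]))  # lower
```

Semantically related words end up closer together than unrelated ones, which is the property Metonymi extends from single words to whole documents.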
The result is a broad linguistic representation that captures the nuances of text data, and of language more generally.
Metonymi’s technology learns skills that transfer to tasks it was never explicitly trained on, an ability known as “transfer learning.” These skills take the form of thought vectors: mathematical representations of individual text documents. Because the representations are not tied to a specific task, they have proven useful across a variety of complex NLP tasks.
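The transfer-learning workflow can be sketched as follows. The document vectors below are hypothetical stand-ins for the output of a pretrained encoder (which is not shown); the point is that the encoder stays frozen and only a tiny task-specific model, here a nearest-centroid classifier, is fit on labeled examples.

```python
import numpy as np

# Hypothetical pretrained document vectors ("thought vectors") with
# topic labels; in practice these would come from a frozen encoder.
train_vecs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
train_labels = ["sports", "sports", "finance", "finance"]

# Only this small task-specific model is trained: one centroid per class.
centroids = {
    label: train_vecs[[l == label for l in train_labels]].mean(axis=0)
    for label in set(train_labels)
}

def classify(vec):
    # Assign a new document's vector to the nearest class centroid.
    return min(centroids, key=lambda c: np.linalg.norm(vec - centroids[c]))

print(classify(np.array([0.85, 0.15])))  # a sports-like vector
```

Because the heavy lifting is done by the pretrained representations, very little labeled data is needed for the downstream task.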
Our technology is trained exclusively on unlabeled data. Because we are not hindered by shortages of expensive labeled data, we can take full advantage of the abundance of unlabeled text sources: we leverage far more data and deliver more generalized representations.
It is this unsupervised objective that allows the technology to create polyvalent document representations, useful across many tasks and data domains. The representations are low-dimensional and easily integrated into data science workflows.