After you post a zipped folder of .txt documents to our API, we process the request asynchronously and email you a link to the corresponding file of representations once they're ready. The file contains numpy arrays that can be unpickled and manipulated directly in Python.
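As a minimal sketch of what loading that file might look like, here we simulate a downloaded representations file and unpickle it. The filename, the dict layout (document name to vector), and the 300-dimensional vectors are all assumptions for illustration, not the actual delivery format.

```python
import pickle
import numpy as np

# Simulate a downloaded representations file (hypothetical layout:
# a dict mapping each document's filename to a dense numpy vector).
fake = {
    "post1.txt": np.random.rand(300),
    "post2.txt": np.random.rand(300),
}
with open("representations.pkl", "wb") as f:
    pickle.dump(fake, f)

# Unpickle the file as you would the real one from the emailed link.
with open("representations.pkl", "rb") as f:
    representations = pickle.load(f)

print(representations["post1.txt"].shape)  # (300,)
```

From here each vector is an ordinary numpy array, ready for whatever downstream computation you need.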
You don't need huge labeled datasets, expensive deep learning consultants, or enormous hosting costs. Just consume the API and build!
Metonymi also supports thought vector comparisons. Suppose you would like to compare the meanings of a collection of social media posts to each other. Because Metonymi representations are low-dimensional, they support far better-behaved similarity metrics than sparse TF-IDF representations, which can run to tens of thousands of dimensions on a large corpus. Classical sparse features also ignore the semantic nuance of the posts: “technology shapes man” is the same as “man shapes technology” in TF-IDF representations, but these two documents will have completely different Metonymi representations.
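A comparison like this might be computed with cosine similarity over the dense vectors. The sketch below uses random 300-dimensional vectors as stand-ins for two posts' representations; the dimensionality is an assumption, not a guarantee of the actual output.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two dense vectors: 1.0 means
    identical direction, 0.0 means orthogonal (unrelated)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 300-dimensional representations of two posts.
post_a = np.random.rand(300)
post_b = np.random.rand(300)

sim = cosine_similarity(post_a, post_b)
```

On 300-dimensional vectors this is a handful of float operations per pair, versus dot products over tens of thousands of mostly-zero entries for sparse TF-IDF features.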
Applications to Kaggle are easy to see. You have a labeled text dataset with a complex task ahead of you. Simple sparse feature approaches like Bag of Words have failed because the task requires features that depend on syntax and the relationships between words. Pass the dataset through our API, retrieve Metonymi's representation for each document, and use those representations as features to improve your Kaggle ensemble.
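One common way to fold the representations into an existing pipeline is to stack them alongside the sparse features you already have and feed the combined matrix to your models. The shapes below (1,000 documents, 20,000 TF-IDF columns, 300-dimensional Metonymi vectors) are illustrative assumptions.

```python
import numpy as np

# Hypothetical feature matrices for 1,000 labeled documents:
# a wide, mostly-zero TF-IDF matrix and a compact dense one.
tfidf_features = np.zeros((1000, 20000))
metonymi_features = np.random.rand(1000, 300)

# Stack both feature sets side by side; any downstream classifier
# in the ensemble can then draw on sparse and semantic signals.
combined = np.hstack([tfidf_features, metonymi_features])
print(combined.shape)  # (1000, 20300)
```

The dense columns carry the syntax- and word-order-sensitive signal the bag-of-words columns lack, which is exactly where this kind of feature augmentation tends to help.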