NLP Profiler is a simple NLP library that allows profiling of datasets with one or more text columns.
When given a dataset and a column name containing text data, NLP Profiler will return either high-level insights or low-level/granular statistical information about the text in that column.
In short: think of it as using the `pandas.describe()` function or running Pandas Profiling on your data frame, but for datasets containing text columns rather than the usual columnar datasets.
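For comparison, here is a minimal sketch (plain pandas only, with a made-up dataframe) of what `describe()` reports for a text column on its own: just `count`, `unique`, `top` and `freq`, with none of the text-specific features that NLP Profiler derives.

```python
import pandas as pd

df = pd.DataFrame({'text_column': ["I love this!", "Not great at all.", "I love this!"]})

# On an object (text) column, pandas.describe() only reports
# count, unique, top and freq — nothing about the text itself.
print(df['text_column'].describe())
```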
In more detail, here is what you get from the library:
- you pass in a Pandas dataframe and the name of the text column as input parameters
- and you get back a new dataframe with various features about the parsed text, per row
  - high-level: sentiment analysis, objectivity/subjectivity analysis, spelling quality check, grammar quality check, etc.
  - low-level/granular: number of characters in the sentence, number of words, number of emojis, etc.
- from the above numerical data in the resulting dataframe, descriptive statistics can be drawn using `pandas.describe()` on the dataframe (see the sketch just after this list)
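For example, here is a minimal sketch of drawing descriptive statistics from the profiled output, using the `apply_text_profiling` call shown in the usage example further below (the sample dataframe is made up, installation instructions follow below, and the exact feature columns generated depend on the library version):

```python
import pandas as pd
from nlp_profiler.core import apply_text_profiling

dataset = pd.DataFrame({'text_column': ["I love this!", "Not sure this works at all..."]})

# Returns a new dataframe with additional per-row feature columns derived from the text.
profiled_dataset = apply_text_profiling(dataset, 'text_column')

# Descriptive statistics over the generated numerical features
# (e.g. word, character and emoji counts).
print(profiled_dataset.select_dtypes(include='number').describe())
```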
See screenshots under the Jupyter section and also under Screenshots for further illustrations.
Under the hood it makes use of a number of libraries that are popular in the AI and ML communities, but its functionality can be extended by replacing or adding other libraries as well.
A simple notebook has been provided to illustrate the usage of the library.
Note: this is a new endeavour and it's probably NOT capable of doing many things yet, including running at scale. Many of these gaps are opportunities we can work on and plug, as we go along using it.
- Python 3.7.x or higher
- Dependencies described in the `requirements.txt`
- (Optional)
  - Jupyter Lab (on your local machine)
  - Google Colab account
  - Kaggle account
Take a look at this short demo of the NLP Profiler library by clicking on the image below:
or you can find the rest of the talk here.
Install directly from the GitHub repo:

```bash
pip install git+https://github.com/neomatrix369/nlp_profiler.git@master
```
```python
import nlp_profiler.core as nlpprof

# returns a new dataframe with additional per-row feature columns about the text
new_text_column_dataset = nlpprof.apply_text_profiling(dataset, 'text_column')
```

or

```python
from nlp_profiler.core import apply_text_profiling

new_text_column_dataset = apply_text_profiling(dataset, 'text_column')
```
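As a quick follow-up check, the sketch below (with a made-up two-row dataframe; the generated column names vary by library version) lists which feature columns the profiling added:

```python
import pandas as pd
from nlp_profiler.core import apply_text_profiling

dataset = pd.DataFrame({'text_column': ["I love this!", "Not sure this works at all..."]})

new_text_column_dataset = apply_text_profiling(dataset, 'text_column')

# The library returns a new dataframe; comparing columns shows
# which per-row features were generated from the text.
added_columns = set(new_text_column_dataset.columns) - set(dataset.columns)
print(sorted(added_columns))
```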
See the Notebooks section for further illustrations.
See Jupyter Notebook
You can open these notebooks directly in Google Colab
Notebook/Kernel | Script | Other related links
Contributions are very welcome; please share back with the wider community (and get credited for it)!
Please have a look at the CONTRIBUTING guidelines, and also have a read of our licensing (and warranty) policy.
Go to the NLP page