Natural language processing using a hashing vectorizer and tf-idf with scikit-learn

We often find in data science that the objects we wish to analyze are textual. For example, they might be tweets, articles, or network logs. Since our algorithms require numerical inputs, we must find a way to convert such text into numerical features. To this end, we utilize a sequence of techniques.

A token is a unit of text. For example, we may specify that our tokens are words, sentences, or characters. A count vectorizer takes textual input and outputs a vector consisting of the counts of its tokens. A hashing vectorizer is a variation on the count vectorizer that aims to be faster and more scalable, at the cost of interpretability and possible hash collisions.

Though raw counts can be useful, they can also be misleading. The reason is that unimportant words, such as the and a (known as stop words), occur with high frequency and hence carry little informative content. For this reason, we often assign words different weights to offset this effect. The main technique for doing so is tf-idf, which stands for term frequency-inverse document frequency. The main idea is that we account for the number of times a term occurs in a document, but discount it by the number of documents it occurs in.
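As a minimal sketch of how these pieces fit together in scikit-learn, the following snippet chains a HashingVectorizer into a TfidfTransformer. The tiny log-like corpus and the n_features value are invented purely for illustration:

from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.pipeline import make_pipeline

# A tiny, invented corpus of log-like strings, purely for illustration
corpus = [
    "GET /index.html HTTP/1.1 200",
    "GET /admin HTTP/1.1 403",
    "POST /login HTTP/1.1 200",
]

# The hashing vectorizer maps each token to one of n_features slots via a
# hash function (collisions are possible); alternate_sign=False keeps the
# counts non-negative so that tf-idf weighting behaves as expected
vectorizer = HashingVectorizer(n_features=2**10, alternate_sign=False)

# The tf-idf transformer reweights the raw counts, discounting terms that
# appear in many documents
tfidf = TfidfTransformer()

pipeline = make_pipeline(vectorizer, tfidf)
X = pipeline.fit_transform(corpus)

print(X.shape)  # (3, 1024) -- a sparse matrix of tf-idf-weighted features

Chaining the two estimators in a pipeline lets us transform new documents with a single call; note that the hashing step is stateless, while the tf-idf step must be fitted on the corpus to learn the document frequencies.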

In cybersecurity, text data is omnipresent; event logs, conversational transcripts, and lists of function names are just a few examples. Consequently, it is essential to be able to work with such data, something you'll learn in this recipe.
