30-04-2021



This tutorial goes over some basic concepts and commands for text processing in R. R is not the only way to process text, nor is it always the best way. Python is the de facto programming language for processing text, with a lot of built-in functionality that makes it easy to use and pretty fast, as well as a number of very mature and full-featured packages such as NLTK and TextBlob. Basic shell scripting can also be many orders of magnitude faster for processing extremely large text corpora -- for a classic reference, see Unix for Poets. Yet there are good reasons to want to use R for text processing, namely that we can do it, and that we can fit it in with the rest of our analyses. Furthermore, there is a lot of very active development going on in the R text analysis community right now (see especially the quanteda package). I primarily make use of the stringr package in the following tutorial, so you will want to install it:
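
If you do not have it installed already, you can grab it from CRAN:

    install.packages("stringr")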


I have also had success linking a number of text processing libraries written in other languages up to R (although covering how to do this is beyond the scope of this tutorial). Here are links to my two favorite libraries:

  • The Stanford CoreNLP libraries do a whole bunch of awesome things including tokenization and part-of-speech tagging. They are much faster than the implementation in the OpenNLP R package.
  • MALLET does a whole bunch of useful statistical analysis of text, including an extremely fast implementation of LDA. You can check out examples here, but download it from the first link above.

Regular Expressions

Regular expressions are a way of specifying rules that describe a class of strings (for example -- every word that starts with the letter 'a') that are more succinct and general than simply generating a dictionary of every possible value that meets some rule and checking against it. They are foundational to lots of different text processing tasks where we want to count types of terms (for example), or identify things like email addresses in documents. If you want to build your competency with text analysis in R, they are definitely a necessary tool. You can start by checking out this link to an overview of regular expressions, and then take a look at this primer on using regular expressions in R. What is important to understand is that they can be far more powerful than simple string matching.

If you want to get started using regular expressions, you can check out the tutorials posted above, but I have also found it very helpful to just start trying out examples and seeing how they work. One simple way to do this is to use an online app with a graphical interface that highlights matches, such as the one provided here. I personally prefer the RegExRx app, which should work on OSX and Windows and is available either as a shareware version or as a paid app on the Apple App Store. This program includes support for Perl style Regular Expressions which are quite common and are used by some R packages. Whichever program you choose, I would suggest just messing around and reading random articles on the internet for a few hours before you get started using Regular Expressions in R. I also tend to use one of these programs to prototype any complex RegEx I want to use in production code.

Example Commands

Let's start with an example string.
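
The exact string matters less than its features; any string with mixed case, numbers, and some punctuation will do. The examples below assume it contains a '?', a '!', and a few numbers:

    my_string <- "Example STRING, with numbers (12, 15 and also 10.2)?!"
    print(my_string)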

First we can lowercase the entire string -- often a good starting place. This will prevent any future string matching from treating 'Example' and 'example' as distinct words, for example, just because one came at the beginning of a sentence:
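
Base R's tolower() does this for us:

    my_string <- tolower(my_string)
    print(my_string)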

We can also take a second string and paste it on the end of the first string:
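
The second string here is just something made up for illustration; paste() glues the two together with a separator of our choosing:

    second_string <- "Wow, two sentences."
    my_string <- paste(my_string, second_string, sep = " ")
    print(my_string)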

Now we might want to split our string up into a number of separate strings. We can do this using the str_split() function, available as part of the stringr R package. The following line will split the combined string from above on exclamation points:
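
Note the [[1]] at the end of the line; the reason for it is explained just below:

    library(stringr)
    # str_split() returns a list with one entry per input string, so we
    # grab the first (and only) entry with [[1]]:
    my_string_vector <- str_split(my_string, "!")[[1]]
    print(my_string_vector)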

Notice that the splitting character gets deleted, but we are now left with essentially two sentences, each stored as a separate string. Furthermore, note that a list object is returned by the str_split() function, so to access the actual vector containing the split strings, we need to use the [[ ]] list operator and get the first entry.

Now, let's imagine we are interested in sentences that contain question marks. We can search for the string in the resulting my_string_vector that contains a '?' by using the grep() command. This command will return the index of any entry in the input vector that contains what we are looking for, or nothing if it could not find a match.
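
Note the double backslash in the pattern; the reason for it is explained just below:

    grep("\\?", my_string_vector)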

One thing you may notice is that the pattern above is not just a '?', but a '\\?'. The reason for this is that '?' is actually a special character when it is used in a regular expression, so we need to escape it with a '\'. However, due to the way that strings get passed in to the underlying C function from R, we actually need a second '\' to ensure that one of them is present when the input is provided to C. You will get the hang of this with practice, but you may want to check out this list of special characters that need to be escaped (have a '\' added in front of them) to make them 'literal'. We may also want to check if each individual string in my_string_vector contains a question mark. This can be very useful for conditional statements -- for example, if we are processing lines of a webpage, we may want to handle lines with header tags <h1> differently than those without header tags, so using a conditional statement with a logical grep, grepl(), may be very useful to us. This function takes any number of strings as input and returns a logical vector of equal length, with TRUE entries where a match was found and FALSE entries where one was not. Let's look at an example:
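
Here grepl() reports, for each entry in my_string_vector, whether it contains a question mark:

    grepl("\\?", my_string_vector)
    # which we can then use in a conditional statement:
    if (grepl("\\?", my_string_vector[1])) {
        print("this string contains a question mark")
    }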

There are two other very useful functions that I use quite frequently. The first replaces all instances of some character(s) with another character. We can do this with the str_replace_all() function, which is detailed below:
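
As an arbitrary illustration, we can replace every 'e' in our string vector with underscores:

    str_replace_all(my_string_vector, "e", "_")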

Note that the first argument is the object where we want to replace characters, the second is the thing we want to replace, and the third is what we want to replace it with. If the function does not find anything to replace, it just returns the input unaltered. Another thing I do all the time is extract all numbers (for example) from a string using the str_extract_all() function:
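
Applied to our combined string, this pulls out every contiguous run of digits. Note that str_extract_all() also returns a list, so we use [[1]] once more:

    str_extract_all(my_string, "[0-9]+")[[1]]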


Note here that we used our first real regex -- [0-9]+ -- which translates to 'match any substring that is one or more contiguous numbers'. Here we will get back a character vector of length equal to the number of matches we found, containing the matches themselves. These are just a few of the many powerful commands available to process text in R, and I have only shown you a very simple regular expression. There is so much to learn in this domain that it can feel overwhelming when you are starting out, so I would suggest starting by using these tools and then Googling to expand your abilities as you need to deal with more complicated chunks of text or text processing tasks.

Cleaning Text

One of the most common things we might want to do is read in, clean, and 'tokenize' (split into individual words) a raw input text file. There are a number of packages that make this quite easy to do in R (I recommend and use quanteda). Later in this tutorial (and more generally in your own work), you will find it much easier to use built-in functions like those in quanteda to do these tasks, but I think it is valuable and instructive to learn a bit about what goes on behind the curtain when we talk about cleaning or tokenizing text. To do this ourselves, we will want to make use of two functions. The first of these will clean an individual string -- removing any characters that are not letters, lowercasing everything, and getting rid of additional spaces between words -- before tokenizing the resulting text and returning a vector of individual words:
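
Here is one way to write such a function -- a sketch built on stringr, with regular expressions you can adapt to your own needs:

    Clean_String <- function(string){
        # Lowercase everything
        temp <- tolower(string)
        # Replace everything that is not a letter or whitespace with a space
        temp <- stringr::str_replace_all(temp, "[^a-z\\s]", " ")
        # Shrink any run of whitespace down to a single space
        temp <- stringr::str_replace_all(temp, "[\\s]+", " ")
        # Tokenize by splitting on spaces
        temp <- stringr::str_split(temp, " ")[[1]]
        # Drop any empty strings left over at the ends
        indexes <- which(temp == "")
        if (length(indexes) > 0) {
            temp <- temp[-indexes]
        }
        return(temp)
    }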

Let's give this function a try by entering the bit of code above in the console (thus defining the function), and then cleaning and tokenizing the following sentence:
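
The exact sentence does not matter; anything with punctuation, numbers, and mixed case will illustrate the point:

    sentence <- "The quick BROWN fox -- all 4 of them?! -- jumped over the lazy dog."
    Clean_String(sentence)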

As we can see, all of the special characters have been removed and we are left with a well-behaved vector of individual words. Now we will want to scale this up to work on an entire input document. To do so, we will loop over the input lines of that document, and in addition to returning the cleaned text itself, we may also want to return some useful metadata, like the total number of tokens or the set of unique tokens. We can do so using the following function:
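
Here is a sketch of such a function, building on Clean_String() above. It returns the tokens along with the two pieces of metadata just mentioned:

    Clean_Text_Block <- function(text){
        # Get rid of blank lines
        indexes <- which(text == "")
        if (length(indexes) > 0) {
            text <- text[-indexes]
        }
        # Clean and tokenize each line, then flatten the results into
        # one long vector of tokens
        clean_text <- unlist(lapply(text, Clean_String))
        # Return the tokens along with some useful metadata
        return(list(num_tokens = length(clean_text),
                    unique_tokens = length(unique(clean_text)),
                    text = clean_text))
    }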

Now let's give it a try. You can download the plain text of a speech given by Barack Obama on February 24, 2009 to a joint session of Congress from the University of Virginia Miller Center Presidential Speech Archive by clicking the link here. Once you have saved this file, you will want to set your working directory in R to the folder where you saved it, and then read it into R using the following lines of code:
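
The file name below is simply what I saved the speech as; substitute whatever name you gave your copy:

    text <- readLines("obama_speech_2_24_09.txt")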

You can now run it through the Clean_Text_Block() function and then take a look at the output:
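
A quick way to inspect the result is with str(), which prints a compact summary of the list:

    clean_speech <- Clean_Text_Block(text)
    str(clean_speech)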

We can see that there are a total of 6146 words in the document, with 1460 of them being unique. You are now past one of the biggest hurdles in text analysis: getting your data into R in a reasonable format.

Generating A Document-Term Matrix by Hand

One of the things we will want to do most often for social science analyses of text data is generate a document-term matrix. This can be done very easily (and robustly) using existing software, and I detail how to do this in the next section. Again, the goal here is just to reveal some of what is going on behind the scenes when we form a document-term matrix. This is actually a relatively challenging programming task, and it is also usually very computationally intensive, so I will be using functions written in C++ in order to accomplish it. You do not need to fully understand the rest of this section in order to be able to use a package like quanteda to form a document-term matrix, so if you are pressed for time, feel free to just skim this section. Before going any further, I suggest you check out my tutorial Using C++ and R Code Together with Rcpp to get the basics of C++ programming under your belt. You may also need to follow some of the steps at the beginning of that tutorial before you will even be able to install the Rcpp package and get it working, especially if you are using Windows or certain versions of Mac OS X. Before you go any further, you will want to make sure you have the following packages installed:
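
Both are available from CRAN:

    install.packages(c("Rcpp", "BH"))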

The BH package is not essential for sourcing the function we are about to use, but it is a good idea to have it installed for use with future C++ functions. The heavy lifting will be done by a C++ function that generates the document-term matrix for us.

You can download the source file for this C++ code by clicking the link here. Once you have saved the file somewhere where you can access it (the example code below assumes it is in your working directory), you can Rcpp::sourceCpp() the code, which will give you access to an R function that has C++ code under the hood.
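
The file name below is just what I saved the source file as; use whatever name you gave your copy:

    library(Rcpp)
    sourceCpp("Generate_Document_Word_Matrix.cpp")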

You can now use the function, so let's try it out on a toy example. The first thing we will want to do is get a second document so we can make a document-term matrix that contains more than just one document. You can download another Obama speech (this time his 2010 State of the Union address) by clicking the link here. We can now read in and tokenize this piece of text as follows:
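
Again, substitute whatever file name you saved the speech under:

    text2 <- readLines("obama_sotu_2010.txt")
    clean_speech2 <- Clean_Text_Block(text2)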

Now we are ready to set things up and use our document word matrix generator function:
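
The exact name and signature of the R function you get depend on the .cpp file you downloaded, so treat the following as a sketch: it assumes the sourced file exposes a function called Generate_Document_Word_Matrix() that takes the number of documents, the pooled vocabulary, the tokenized documents, and their lengths. Check the source file for the actual interface:

    # Pool the vocabulary across both documents
    unique_words <- unique(c(clean_speech$text, clean_speech2$text))
    # The function name and arguments below are assumptions -- see the
    # .cpp file you downloaded for the real interface
    doc_term_matrix <- Generate_Document_Word_Matrix(
        number_of_docs = 2,
        number_of_unique_words = length(unique_words),
        unique_words = unique_words,
        Document_Words = list(clean_speech$text, clean_speech2$text),
        Document_Lengths = c(clean_speech$num_tokens, clean_speech2$num_tokens))
    # Label the columns with the vocabulary so the matrix is interpretable
    colnames(doc_term_matrix) <- unique_words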

Once we have generated this matrix, we can use it for all sorts of analyses, from statistical topic models like LDA (using the topicmodels package, for example) to simply including term counts in a regression model. It is important to note that while the approach outlined above technically works, it is both much slower and less full-featured than the functionality included in the R packages for text analysis. In general, you should just use one of these packages in your own research, but hopefully now with a bit more understanding of the kinds of things that are going on under the hood.

Using quanteda for Text Processing

The previous section focused on illustrating some very basic tools and the under-the-hood functionality necessary to generate a document-term matrix. However, there are easier ways to do this. One of the most full-featured packages for doing text processing (including in multiple languages) in R is the quanteda package. If we want to use the package, we will first have to install it:
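
As usual, it is on CRAN:

    install.packages("quanteda")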

Now let's say we want to work with the same two speeches from the previous example. We can generate a document term matrix using the following snippet of code:
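
Here is a sketch using the current quanteda interface (in older releases you could also call dfm() directly on a corpus). It reuses the raw text vectors we read in earlier:

    library(quanteda)
    # Collapse each speech into a single document and build a corpus
    speeches <- c(speech_2009 = paste(text, collapse = " "),
                  speech_2010 = paste(text2, collapse = " "))
    speech_corpus <- corpus(speeches)
    # Tokenize, strip punctuation, and lowercase
    speech_tokens <- tokens(speech_corpus, remove_punct = TRUE)
    speech_tokens <- tokens_tolower(speech_tokens)
    # Form the document-feature (document-term) matrix
    doc_term_matrix <- dfm(speech_tokens)
    doc_term_matrix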

This is certainly easier and more efficient than writing the code yourself. In general, using quanteda to generate document-term matrices makes a lot of sense for ingesting most text corpora. In fact, this is what I currently do in all of my research code. One of the particularly useful features of the quanteda package is that it automatically stores document-term matrices as sparse matrix objects, which tend to be enormously more space efficient than dense matrices.


If you are interested in working with the Stanford CoreNLP and MALLET libraries from R, I have a (beta) R package that wraps these libraries, along with providing a number of utility and document comparison functions. This package is meant to serve as a complement to the quanteda package, and may be a good option if you are interested in heavy NLP applications in R. The package is available on GitHub here: https://github.com/matthewjdenny/SpeedReader.

Thank you for checking out this tutorial, and please shoot me an email if there is anything you would like to see added. If you are interested in trying to run the R code from this tutorial, you can download the .R file here.