WebScraping As Sourcing Technique For NLP

Introduction

In this post, we provide a series of web scraping examples and a reference for people looking to bootstrap text data for a language model. The advantage is that a greater number of spoken-language domains can be covered. Newer vocabulary and very common slang are picked up through this method, since most corporate language managers do not often interact with this type of speech.

Most people would not consider Spanish under-resourced. However, considering the word error rates of products like the speech recognition feature in a Hyundai or Mercedes-Benz, or Spanish text classification on social media platforms skewed toward English-centric content, there certainly seems to be a performance gap between contemporary Spanish speech in the US and the products developed for that demographic of speakers.

Lyrics are a great reference point for spoken speech. This contrasts greatly with long-form news articles, which are almost academic in tone. Read speech also carries a certain intonation, which does not reflect the short, abbreviated or elliptical patterning common to spoken speech. As such, knowing how to parse the letras.com pages may be a good idea for those refining and expanding language models with “real world speech”.

Overview:

  • Point to Letras.com
  • Retrieve Artist
  • Retrieve Artist Songs
  • Generate individual texts for songs until complete.
  • Repeat until all artists in the artists file are retrieved.

The above steps are very abbreviated, and even the description below is perhaps too short. If you’re a beginner, feel free to reach out to lezama@lacartita.com; I’d rather work with beginners directly. Experienced Python programmers should have no issue with the present documentation or with modifying the basic script and idea to their liking.

Sourcing

In NLP, the number one issue will never be a lack of innovative techniques, community or documentation for commonly used libraries. The number one issue is and will continue to be the proper sourcing and development of training data.

Many practitioners have found that accurate, use-case-specific data is more valuable than a generalized solution like BERT or other large language models. These issues are most evident in languages, like Spanish, that do not have as high a presence in the resources that BERT is built from, like Wikipedia and Reddit.

Song Lyrics As Useful Test Case

At a high level, we created a list of relevant artists, then looped through that list and searched letras.com to see whether the site had any songs for each of them. Once we found that a request yielded a result, we looped through the individual songs for each artist.


Requests, BS4

The proper acquisition of data can be accomplished with BeautifulSoup. The library has been around for over 10 years and it offers an easy way to process HTML or XML parse trees in Python; you can think of BS4 as a way to acquire the useful content of an HTML page, i.e. everything bounded by tags. The requests library is also important, as it is how we reach out to a webpage and retrieve the entire HTML document.

# -*- coding: utf-8 -*-
"""
Created on Sat Oct 16 22:36:11 2021
@author: RicardoLezama.com
"""
import requests
artist = requests.get("https://www.letras.com").text

The line `requests.get("https://www.letras.com").text` does what the attribute `text` implies: the call obtains the HTML file's content and makes it available within the Python program. Adding a function definition helps group this useful content together.

Functions For WebScraping

Creating a BS4 object is easy enough. Pass the link as the first argument, then parse each lyrics page by its div tags. In this case, a link such as "https://www.letras.com" is the argument to pass to the function. The function lyrics_url returns all the div tags with a particular class value; that is the text that contains the artist's landing page, which itself can be parsed for available lyrics.



import requests
from bs4 import BeautifulSoup

def lyrics_url(web_link):
    """
    This helps create a BS4 object.

    Args: web_link, the URL of the page containing the lyrics.

    Return: the div tags that contain the lyrics text.
    """
    artist = requests.get(web_link).text
    check_soup = BeautifulSoup(artist, 'html.parser')
    return check_soup.find_all('div', class_='cnt-letra p402_premium')
    
On letras.com, the highlighted portion is contained within a <div> tag.

The image above shows the content retrieved for a potential argument to lyrics_url, "https://www.letras.com/jose-jose/135222/". See the GitHub repository for more details.
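
As a quick illustration, here is how one might pull the plain lyric text out of the div tags that lyrics_url returns; the example URL is the José José page shown above, and get_text is standard BeautifulSoup usage, though the exact markup may change if letras.com updates its layout.

divs = lyrics_url("https://www.letras.com/jose-jose/135222/")
if divs:
    # get_text() strips the tags and keeps only the lyric lines
    lyric_text = divs[0].get_text(separator="\n", strip=True)
    print(lyric_text[:200])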

Organizing Content

Drilling down to a specific artist requires basic knowledge of how Letras.com organizes songs into an artist's home page. The function artist_songs_url parses the entirety of a given artist's song list and drills down further into each specific title.

In the main statement, we call these functions to iterate through the artist pages and their songs, generating a uniquely named file for each song and its lyrics. The function generate_text writes one set of lyrics into each individual file. Later, for Gensim, we can turn each lyrics file into a single coherent Gensim list (see the sketch after the script below).



def artist_songs_url(web_link):
    """
    Land on the URLs of the songs for a given artist.

    Args: web_link, the artist's page.

    Return: songs, e.g. from https://www.letras.com/gru-;/
    """
    artist = requests.get(web_link).text
    print("Status Code", requests.get(web_link).status_code)
    check_soup = BeautifulSoup(artist, 'html.parser')
    # each song appears as <li class="cnt-list-row -song">; the lyrics themselves
    # live under <div class="cnt-letra p402_premium"> and are handled by lyrics_url
    songs = check_soup.find_all('li', class_='cnt-list-row -song')
    return songs

def generate_text(url):
    import uuid
    songs = artist_songs_url(url)
    for a in songs:
        song_lyrics = lyrics_url(a['data-shareurl'])
        print(a['data-shareurl'])
        # write each song's lyrics into its own uniquely named file
        new_file = open(str(uuid.uuid1()) + 'results.txt', 'w', encoding='utf-8')
        new_file.write(str(song_lyrics[0]))
        new_file.close()
        print(song_lyrics)
    print('we have completed the download for', url)


import sys

def main():
    artistas = open('artistas', 'r', encoding='utf-8').read().splitlines()
    url = 'https://www.letras.com/'
    for a in artistas:
        generate_text(url + a + "/")
        print('done')
# once complete, run `copy *results output.txt` to consolidate the lyrics into a single file


if __name__ == '__main__':
    sys.exit(main())
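
As a rough sketch of the Gensim hand-off mentioned above, the individual *results.txt files could be gathered back into a list of token lists. The whitespace tokenization below is a deliberate simplification of the regex-based normalization used elsewhere on this site, so treat it as illustrative only.

import glob

def gather_lyrics_for_gensim(pattern='*results.txt'):
    # read every downloaded lyrics file and split it into tokens,
    # one list of tokens per song (i.e. one "document" per song)
    corpus = []
    for path in glob.glob(pattern):
        text = open(path, 'r', encoding='utf-8').read()
        corpus.append(text.lower().split())
    return corpus

lyrics_corpus = gather_lyrics_for_gensim()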

Another Zero Day Exploit For Microsoft

Even Windows 11 is affected.

Apparently, one can open a command line window and deploy an exploit to raise permissions on a machine using a .exe file freely available on Github. Nice.

The exploit works on Windows 10, Windows 11 and Windows Server. It consists of a low-privileged user raising their own privileges by running basic commands at the CMD prompt. Fascinating.

Bleeping Computer Blog Finds Exploit

The exact issue was described by BleepingComputer yesterday in a much-circulated blog post:

[BP] has tested the exploit and used it to open a command prompt with SYSTEM privileges from an account with only low-level ‘Standard’ privileges.

– Bleeping Computer

Word2Vec Mexican Spanish Model: Lyrics, News Documents

A Corpus That Contains Colloquial Lyrics & News Documents For Mexican Spanish

This experimental dataset was developed by four social science specialists and one industry expert (myself), with different samples from Mexico-specific news texts and normalized song lyrics. The intent is to understand how small, phrase-level constituents interact with larger, editorialized text. There appears to be no ill effect from combining varied texts.

We are working on the assumption that a single song is a document. A single news article is a document too.

In this post, we provide a Mexican Spanish Word2Vec model compatible with the Gensim Python library. The word2vec model is derived from a corpus created by four research analysts and myself. This dataset was tagged at the document level for the topic of ‘Mexico’ news. The language is Mexican Spanish with an emphasis on alternative news outlets.

One way to use this WVModel is shown here: scatterplot repo.

Lemmatization Issues

We chose not to lemmatize this corpus prior to including it in the word vector model. The reason is two-fold: diminished performance and a prohibitive runtime for the lemmatizer. It takes close to 8 hours for a Spacy lemmatizer to run through the entire set of sentences and phrases. Instead, we made sure normalization was sufficiently accurate and factored out the major stopwords (a small sketch of that filtering follows).
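
The post does not list the exact stopwords, so the set below is only a hypothetical illustration of the kind of filtering described; the real list used for the model may differ.

# hypothetical high-frequency Spanish stopwords; the actual list had roughly 20 entries
stopwords = {'de', 'la', 'que', 'el', 'en', 'y', 'a', 'los', 'se', 'del',
             'las', 'un', 'por', 'con', 'no', 'una', 'su', 'para', 'es', 'lo'}

def remove_stopwords(documents):
    # documents is a list of token lists, the same shape Gensim expects
    return [[token for token in doc if token not in stopwords] for doc in documents]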

Training Example

Below we show a basic example of how we would train on the text data. The text is passed along to the Word2Vec module. The relevant parameters are set, but the reader can change them as they see fit. Ultimately, the resulting W2V model is saved locally.

In this case, the model is saved as “Mex_Corona_.w2v”, a name that is referenced further below in top_5.py.

from gensim.models import Word2Vec, KeyedVectors

# normalize_corpus and scatter_vector are defined later in this post
important_text = normalize_corpus('C:/<<ZYZ>>/NER_news-main/corpora/todomexico.txt')

# Build the model by selecting the parameters.
our_model = Word2Vec(important_text, vector_size=100, window=5, min_count=2, workers=20)
# Save the model.
our_model.save("Mex_Corona_.w2v")
# Inspect the model by looking for the most similar words for a test word.
# print(our_model.wv.most_similar('mujeres', topn=5))
scatter_vector(our_model, 'Pfizer', 100, 21)
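
Once saved, the model can be reloaded in a later session and queried directly; this is standard Gensim usage rather than anything specific to this corpus.

from gensim.models import Word2Vec

loaded_model = Word2Vec.load("Mex_Corona_.w2v")
# query the reloaded model the same way as the in-memory one
print(loaded_model.wv.most_similar('mujeres', topn=5))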

Corpus Details

Specifically, from March 2020 to July 2021, a group of Mexico City based research analysts determined which documents were relevant to this Mexico news category. These analysts selected thousands of documents, and about 1,200 of them, at an average length of 500 words, made their way into our Gensim language model. Additionally, the corpus contained here includes lyrics with Chicano slang and colloquial Mexican speech.

We scraped the webpages of over 300 Mexican ranchero and norteño artists on https://letras.com. These artists ranged from a few dozen composers from the 1960s to contemporary groups who code-switch due to California or US Southwest ties. The documents tagged as relevant to the Mexico news topic were combined with these lyrics, with around 20 of the most common stopwords removed. This greatly reduced the size of the original corpus while also increasing the accuracy of the word2vec similarity analysis.

In addition to the stopword removal, we also conducted light normalization. This was restricted to finding colloquial transcriptions in the song lyrics and converting them to orthographically correct versions.
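
A minimal sketch of that kind of light normalization is shown below; the specific colloquial spellings and their replacements are hypothetical examples, not the actual mapping used for the corpus.

import re

# hypothetical colloquial-to-standard substitutions for song lyrics
colloquial_map = {
    r'\bpa\b': 'para',
    r'\bpos\b': 'pues',
    r'\bnomas\b': 'nomás',
}

def normalize_colloquial(line):
    for pattern, replacement in colloquial_map.items():
        line = re.sub(pattern, replacement, line)
    return line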

Normalizing Spanish News Data

Large corporations develop language models under the guidance of product managers whose life experiences do not reflect those of users. In our view, there is a chasm between the consumer and the engineer that underscores the need to embrace alternative datasets. Therefore, in this language model, we aimed for greater inclusion. The phrases are from a genre that encodes a rich oral history, with speech commonly used amongst Mexicans in colloquial settings.

Song Lyrics For Colloquial Speech

This dataset contains lyrics from over 300 groups. The phrase-length lyrics have been normalized to obey standard orthographic conventions. It also contains over 1000 documents labeled as relevant to Mexico news.

Coronavirus and similar words.

Github Lyrics Gensim Model

We have made the lyrics and news language model available. The model is contained here alongside some basic normalization methods in a module.

Colloquial Words

The similarity scores for a word like ‘amor’ (love) are shown below. In our colloquial/lyrics language model, we can see that ‘corazon’ (heart) is the closest word to ‘amor’.

print(our_model.wv.most_similar('amor', topn=1))
[('corazon', 0.8519232869148254)]

Let’s try to filter through the 8 most relevant results for ‘amor’:

scatter_vector('mx_lemm_ner-unnorm_1029_after_.w2v', 'amor', 100, 8)
Out[18]: 
[('corazon', 0.8385680913925171),
 ('querer', 0.7986088991165161),
 ('jamas', 0.7974023222923279),
 ('dime', 0.788547158241272),
 ('amar', 0.7882217764854431),
 ('beso', 0.7817134857177734),
 ('adios', 0.7802879214286804),
 ('feliz', 0.7777709364891052)]

For any and all inquiries, please send me a linkedin message here: Ricardo Lezama. The word2vec language model file is right here: Spanish-News-Colloquial.

Here is the scatterplot for ‘amor’:

Scatterplot for ‘amor’.

Diversity Inclusion Aspect – Keyterms

Visualizing the data is fairly simple. The scatterplot method allows us to show which terms surface in similar contexts.

Diversity in the context of Mexican Spanish news text. Query: “LGBT”

Below, I provide an example of how to load and query the Word2Vec model. These saved model files are friendly to Gensim's Word2Vec module.

from gensim.models import Word2Vec, KeyedVectors
coronavirus_mexico = "mx_lemm_ner-unnorm_1029_after_.w2v"
coronavirus = "coronavirus-norm_1028.w2v"
wv_from_text = Word2Vec.load(coronavirus)

# Inspect the model by looking for the most similar words for a test word.
print(wv_from_text.wv.most_similar('dosis', topn=5))

Semantic Similarity & Visualizing Word Vectors

Introduction: Two Views On Semantic Similarity

In Linguistics and Philosophy of Language, there are various methods and views on how to best describe and justify semantic similarity. This tutorial will be taken as a chance to lightly touch upon very basic ideas in Linguistics. We will introduce in a very broad sense the original concept of semantic similarity as it pertains to natural language.

Furthermore, we will see how the linguistics view is drastically different from the state of the art Machine Learning techniques. I offer no judgments on why this is so. It’s just an opportunity to compare and contrast passively. Keeping both viewpoints in mind during an analysis is helpful. Ultimately, it maximizes our ability to understand valid Machine Learning output.

The Semantic Decomposition View

There is a compositional view, attributable in its earliest 19th-century incarnation to Gottlob Frege, in which the meaning of terms can be decomposed into simpler components such that the additive process of combining them yields a distinct meaning. Thus, two complex meanings may be similar to one another if they are composed of the same elements.

For example, the meaning of ‘king’ could be construed as an array of features, like the property of being human, royalty and male. Under this reasoning, the same features would carry over to describe ‘queen’, but the decomposition of the word would replace male with female. Thus, in the descriptive and compositional approach mentioned, categorical descriptions are assigned to words whereby decomposing a word reveals binary features for ‘human’, ‘royalty’ and ‘male’. Breaking down the concepts represented by words into simpler meanings is what is meant by ‘feature decomposition’ in a semantic and linguistic context.
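
A toy representation of that feature decomposition might look like the following; the particular feature names are just the ones mentioned above.

# binary feature decomposition for 'king' and 'queen'
king = {'human': True, 'royalty': True, 'male': True}
queen = {'human': True, 'royalty': True, 'male': False}

# the two meanings are similar insofar as they share feature values
shared_features = {f for f in king if king[f] == queen[f]}
print(shared_features)  # {'human', 'royalty'}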

The Shallow Similarity View

Alternatively, Machine Learning approaches to semantic similarity take a contextual approach to describing a word: words are assigned shared indices based on the contexts they appear in. The words ‘king’ and ‘queen’ will appear in contexts that are more similar to one another than to those of words like ‘dog’ or ‘cat’, which implies that they have more in common. Intuitively, we understand that these words have more in common because of their usage in very similar contexts. Each word is represented as a vector, and words with similar or shared contexts have closely adjacent vectors.
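
The notion of "closely adjacent vectors" is usually made precise with cosine similarity; the sketch below uses tiny made-up three-dimensional vectors purely for illustration, whereas real Word2Vec vectors have 100 or more dimensions.

import numpy as np

def cosine_similarity(u, v):
    # values near 1.0 mean very similar contexts, values near 0 mean unrelated contexts
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.75, 0.2])
dog = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # high: similar contexts
print(cosine_similarity(king, dog))    # noticeably lower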

Where both approaches eventually converge is in whether the output of a semantic theory or a vector-driven description of words matches language users' intuitions. In this tutorial and series of examples, we will observe how the Word2Vec module does fairly well with new concepts that only recently appeared in mass texts. Furthermore, these texts are in Mexican Spanish, which implies that the normalization steps are unique to these pieces of unstructured data.

Working With Mexican Spanish In Word2Vec

In this series of Python modules, I created a vector model from a Mexican Spanish news corpus. Each module has a purpose: normalization.py cleans text so that it is interpretable for Word2Vec and produces the output lists necessary to pass along to Gensim; scatterplot.py visualizes the vectors from the model. The corpus itself was developed as described below.

This dataset of 4000 documents is verified as being relevant to several topics. For this tutorial, there are three relevant topics: {Mexico, Coronavirus, and Politics}. Querying the model for words in this pragmatic domain is what is most sensible. This content exists in the LaCartita Db and is annotated by hand. The annotators are a group of Mexican graduates from UNAM and IPN universities. A uniform consensus amongst the news taggers was required for a document's introduction into the set. There were 3 women and 1 man within the group of analysts, all of them having prior experience gathering data in this domain.

While the 4000 Mexican Spanish news documents analyzed are unavailable on GitHub, a smaller set is provided on that platform for educational purposes. Under all conditions, the data was tokenized and normalized with a set of Spanish-centric regular expressions.

Please feel free to reach out to that group at research@lacartita.com for more information on this hand-tagged dataset.

Normalization in Spanish

There are three components to this script: normalizing, training a model and visualizing the data points within the model. This is why we have SkLearn and Matplotlib for visualization, Gensim for training and custom Python for normalization. In general, the pipeline cleans data, organizes it into a list-of-lists format that works for the Word2Vec module and trains a model. I’ll explain how each of those steps is performed below.

The normalize_corpus Method

Let’s start with the normalization step, which can be tricky given that the dataset presents diacritics and characters not expected in English. We developed a regular expression that lets us find all the valid tokens in the Mexican Spanish dataset.

from gensim.models import Word2Vec
import numpy as np
import re
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

plt.style.use('ggplot')

def normalize_corpus(raw_corpus):
    """
    This function reads clean text. There is a read attribute for the text.

    Argument: a file path that contains a well formed txt file.

    Returns: a 'list of lists' format friendly to Gensim, one token list per line.
    """
    raw_corpus = open(raw_corpus, 'r', encoding='utf-8').read().splitlines()
    # Tokenize each line with a regex that keeps word-like strings,
    # including accented Spanish vowels; this is the simple way to drop punctuation.
    formatted_sentences = []
    for sentences in raw_corpus:
        a_words = re.findall(r'[A-Za-z\-0-9\w+á\w+\w+é\w+\w+í\w+\w+ó\w+\w+ú\w+]+', sentences.lower())
        formatted_sentences.append(a_words)
    return formatted_sentences

important_text = normalize_corpus(<<file-path>>)

Once we generate a list of formatted sentences, which consists of lists of lists containing strings (a single list is a ‘document’), we can use that total set of lists as input for a model. Building the model is likely the easiest part, but formatting the data and compiling it in a usable manner is the hardest. For instance, the document below is an ordered, normalized and tokenized list of strings from this Mexican Spanish News corpus. Feel free to copy/paste in case you want to review the nature of this document:

['piden', 'estrategia', 'inmediata', 'para', 'capacitar', 'policías', 'recientemente', 'se', 'han', 'registrado', 'al', 'menos', 'tres', 'casos', 'de', 'abuso', 'de', 'la', 'fuerza', 'por', 'parte', 'de', 'elementos', 'policiales', 'en', 'los', 'estados', 'de', 'jalisco', 'y', 'en', 'la', 'ciudad', 'de', 'méxico', 'el', 'economista', 'organizaciones', 'sociales', 'coincidieron', 'en', 'que', 'la', 'relación', 'entre', 'ciudadanos', 'y', 'policías', 'no', 'debe', 'ser', 'de', 'adversarios', 'y', 'las', 'autoridades', 'tanto', 'a', 'nivel', 'federal', 'como', 'local', 'deben', 'plantear', 'una', 'estrategia', 'inmediata', 'y', 'un', 'proyecto', 'a', 'largo', 'plazo', 'para', 'garantizar', 'la', 'profesionalización', 'de', 'los', 'mandos', 'policiacos', 'con', 'apego', 'a', 'los', 'derechos', 'humanos', 'recientemente', 'se', 'han', 'difundido', 'tres', 'casos', 'de', 'abuso', 'policial', 'el', 'primero', 'fue', 'el', 'de', 'giovanni', 'lópez', 'quien', 'fue', 'asesinado', 'en', 'jalisco', 'posteriormente', 'la', 'agresión', 'por', 'parte', 'de', 'policías', 'capitalinos', 'contra', 'una', 'menor', 'de', 'edad', 'durante', 'una', 'manifestación', 'y', 'el', 'tercero', 'fue', 'el', 'asesinato', 'de', 'un', 'hombre', 'en', 'la', 'alcaldía', 'coyoacán', 'en', 'la', 'cdmx', 'a', 'manos', 'de', 'policías', 'entrevistada', 'por', 'el', 'economista', 'la', 'presidenta', 'de', 'causa', 'en', 'común', 'maría', 'elena', 'morera', 'destacó', 'que', 'en', 'ningún', 'caso', 'es', 'admisible', 'que', 'los', 'mandos', 'policiales', 'abusen', 'de', 'las', 'y', 'los', 'ciudadanos', 'y', 'si', 'bien', 'la', 'responsabilidad', 'recae', 'sobre', 'el', 'uniformado', 'que', 'actúa', 'las', 'instituciones', 'deben', 'garantizar', 'la', 'profesionalización', 'de', 'los', 'elementos', 'los', 'policías', 'son', 'un', 'reflejo', 'de', 'la', 'sociedad', 'a', 'la', 'que', 'sirven', 'y', 'ello', 'refleja', 'que', 'hay', 'una', 'sociedad', 'sumamente', 'violenta', 'y', 'ellos', 'también', 'lo', 'son', 'y', 'no', 'lo', 'controlan', 'declaró', 'que', 'más', 'allá', 'de', 'que', 'el', 'gobernador', 'de', 'jalisco', 'enrique', 'alfaro', 'y', 'la', 'jefa', 'de', 'gobierno', 'de', 'la', 'cdmx', 'claudia', 'sheinbaum', 'condenen', 'los', 'hechos', 'y', 'aseguren', 'que', 'no', 'se', 'tolerará', 'el', 'abuso', 'policial', 'deben', 'iniciar', 'una', 'investigación', 'tanto', 'a', 'los', 'uniformados', 'involucrados', 'como', 'a', 'las', 'fiscalías', 'sobre', 'las', 'marchas', 'agregó', 'que', 'si', 'bien', 'las', 'policías', 'no', 'pueden', 'lastimar', 'a', 'las', 'personas', 'que', 'ejercen', 'su', 'derecho', 'a', 'la', 'libre', 'expresión', 'dijo', 'que', 'hay', 'civiles', 'que', 'no', 'se', 'encuentran', 'dentro', 'de', 'los', 'movimientos', 'y', 'son', 'agredidos', 'es', 'importante', 'decir', 'quién', 'está', 'tras', 'estas', 'manifestaciones', 'violentas', 'en', 'esta', 'semana', 'vimos', 'que', 'no', 'era', 'un', 'grupo', 'de', 'mujeres', 'luchando', 'por', 'sus', 'derechos', 'sino', 'que', 'fueron', 'grupos', 'violentos', 'enviados', 'a', 'generar', 'estos', 'actos', 'entonces', 'es', 'necesario', 'definir', 'qué', 'grupos', 'políticos', 'están', 'detrás', 'de', 'esto', 'puntualizó', 'el', 'coordinador', 'del', 'programa', 'de', 'seguridad', 'de', 'méxico', 'evalúa', 'david', 'ramírez', 'de', 'garay', 'dijo', 'que', 'las', 'autoridades', 'deben', 'de', 'ocuparse', 'en', 'plantear', 'una', 'estrategia', 'a', 'largo', 'plazo', 'para', 'que', 'las', 'instituciones', 'de', 'seguridad', 'tengan', 'la', 'estructura', 
'suficiente', 'para', 'llevar', 'a', 'cabo', 'sus', 'labores', 'y', 'sobre', 'todo', 'tengan', 'como', 'objetivo', 'atender', 'a', 'la', 'ciudadanía', 'para', 'generar', 'confianza', 'entre', 'ellos', 'desde', 'hace', 'muchos', 'años', 'no', 'vemos', 'que', 'la', 'sociedad', 'o', 'los', 'gobiernos', 'federales', 'y', 'locales', 'tomen', 'en', 'serio', 'el', 'tema', 'de', 'las', 'policías', 'y', 'la', 'relación', 'que', 'tienen', 'con', 'la', 'comunidad', 'lo', 'que', 'estamos', 'viviendo', 'es', 'el', 'gran', 'rezago', 'que', 'hemos', 'dejado', 'que', 'se', 'acumule', 'en', 'las', 'instituciones', 'de', 'seguridad', 'indicó', 'el', 'especialista', 'apuntó', 'que', 'además', 'de', 'la', 'falta', 'de', 'capacitación', 'las', 'instituciones', 'policiales', 'se', 'enfrentan', 'a', 'la', 'carga', 'de', 'trabajo', 'la', 'falta', 'de', 'protección', 'social', 'de', 'algunos', 'uniformados', 'la', 'inexistencia', 'de', 'una', 'carrera', 'policial', 'entre', 'otras', 'deficiencias', 'la', 'jefa', 'de', 'la', 'unidad', 'de', 'derechos', 'humanos', 'de', 'amnistía', 'internacional', 'méxico', 'edith', 'olivares', 'dijo', 'que', 'la', 'relación', 'entre', 'policías', 'y', 'ciudadanía', 'no', 'debe', 'ser', 'de', 'adversarios', 'y', 'enfatizó', 'que', 'es', 'necesario', 'que', 'las', 'personas', 'detenidas', 'sean', 'entregadas', 'a', 'las', 'autoridades', 'correspondientes', 'para', 'continuar', 'con', 'el', 'proceso', 'señaló', 'que', 'este', 'lapso', 'es', 'el', 'de', 'mayor', 'riesgo', 'para', 'las', 'personas', 'que', 'son', 'detenidas', 'al', 'tiempo', 'que', 'insistió', 'en', 'que', 'las', 'personas', 'encargadas', 'de', 'realizar', 'detenciones', 'deben', 'tener', 'geolocalización', 'no', 'observamos', 'que', 'haya', 'una', 'política', 'sostenida', 'de', 'fortalecimiento', 'de', 'los', 'cuerpos', 'policiales', 'para', 'que', 'actúen', 'con', 'apego', 'a', 'los', 'derechos', 'humanos', 'lo', 'otro', 'que', 'observamos', 'es', 'que', 'diferentes', 'cuerpos', 'policiales', 'cuando', 'actúan', 'en', 'conjunto', 'no', 'necesariamente', 'lo', 'hacen', 'de', 'manera', 'coordinada']

We build the model with just a few lines of Python code once the lists of lists are contained in an object. The next step is to provide the object important_text as the argument to Word2Vec. The Word2Vec class has a few relevant parameters, which I will not review in depth here.

from gensim.models import Word2Vec

important_text = normalize_corpus(<<file-path>>)

mexican_model = Word2Vec(important_text, vector_size=100, window=5, min_count=5, workers=10)
mexican_model.save("NewMod1el.w2v")

The scatterplot Method: Visualizing Data

The scatter plot method for vectors allows for quick visualization of similar terms. The scatter_vector function takes as an argument a model that contains all the vector representations of the Spanish MX content.

def scatter_vector(modelo, palabra, size, topn):
    """ This scatter plot for vectors allows for quick visualization of similar terms. 
    
    Argument: a model containing vector representations of the Spanish MX content. word
    is the content you're looking for in the corpus.
    
    Return: close words    
    """
    arr = np.empty((0,size), dtype='f')
    word_labels = [palabra]
    palabras_cercanas = modelo.wv.similar_by_word(palabra, topn=topn)
    arr = np.append(arr, np.array([modelo.wv[palabra]]), axis=0)
    for wrd_score in palabras_cercanas:
        wrd_vector = modelo.wv[wrd_score[0]]
        word_labels.append(wrd_score[0])
        arr = np.append(arr, np.array([wrd_vector]), axis=0)
    tsne = TSNE(n_components=2, random_state=0)
    np.set_printoptions(suppress=True)
    Y = tsne.fit_transform(arr)
    x_coords = Y[:, 0]
    y_coords = Y[:, 1]
    plt.scatter(x_coords, y_coords)
    for label, x, y in zip(word_labels, x_coords, y_coords):
        plt.annotate(label, xy=(x, y), xytext=(0, 0), textcoords='offset points')
    plt.xlim(x_coords.min()+0.00005, x_coords.max()+0.00005)
    plt.ylim(y_coords.min()+0.00005, y_coords.max()+0.00005)
    plt.show()
    return palabras_cercanas

scatter_vector(mexican_model, 'coronavirus', 100, 21)

Coronavirus Word Vectors

The coronavirus corpus contained here is Mexico-centric in its discussions. Generally, it was sourced from a combination of mainstream news sources, like La Jornada, and smaller digital-only press, like SemMexico.

We used Word2Vec to develop vector representations of words. This allows us to rank the level of similarity between words with a number between 0 and 1. Word2Vec is a Python module for indexing the shared contexts of words and then representing each word as a vector. Each vector is supposed to stand in as a representation of meaning proximity based on word usage. We used Word2Vec to develop a semantic similarity representation for Coronavirus terminology within news coverage.
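
For a single pair of words, that similarity number can be read directly off the trained model with Gensim's similarity call; assuming the mexican_model object built earlier in this post, a query might look like this.

# similarity between two terms, assuming both occur in the corpus vocabulary
print(mexican_model.wv.similarity('coronavirus', 'covid'))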

In this set of about 1,200 documents, we created a vector model for key terms; the printed results below show how related other words are to our target word ‘coronavirus‘. The most similar terms were ‘covid-19’, ‘virus’ and the shortening ‘covid’. The validity of these results was obvious enough and indicates that our document set contains enough content to represent our intuitions about this topic.

[('covid-19', 0.8591713309288025),
('virus', 0.8252751231193542),
('covid', 0.7919320464134216),
('sars-cov-2', 0.7188869118690491),
('covid19', 0.6791930794715881),
('influenza', 0.6357837319374084),
('dengue', 0.6119976043701172),
('enfermedad', 0.5872418880462646),
('pico', 0.5461580753326416),
('anticuerpos', 0.5339271426200867),
('ébola', 0.5207288861274719),
('repunte', 0.520190417766571),
('pandémica', 0.5115000605583191),
('infección', 0.5103719234466553),
('fumigación', 0.5102646946907043),
('alza', 0.4952083230018616),
('detectada', 0.4907490015029907),
('sars', 0.48677393794059753),
('curva', 0.48023557662963867),
('descenso', 0.4770597517490387),
('confinamiento', 0.4769912660121918)]
The word ‘coronavirus’ in Mexican Spanish text and its adjacent word vectors.

One of the measures of the merit of a large machine learning model is whether the output aligns with human intuition. This implies that we should ask ourselves whether the top-ranked ‘similar’ words presented by this word2vec model match up with our own sense of ‘coronavirus’. Overwhelmingly, the answer is ‘yes’, since ‘covid’ and ‘covid19’, with or without a hyphen, nearly always mean the same thing, as does ‘virus’ in some texts.

Strong Normalization Leads To Better Vectors

Better normalization leads to better vectors.

This is verifiable by comparing scatterplots produced under the distinct text normalizations that one intuits are best after analyzing the initial training data.

For example, many place names are effectively compound words or complex strings, which can lead to misleading segmentation. This adds noise, effectively misaligning other words in the word vector model. Therefore, finding a quick way to ensure place names are represented as single tokens helps unrelated terms keep their distance in the vector space. Consider the scatterplot below, where the names ‘baja california sur’ and ‘baja california’ are not properly tokenized:

Bad Segmentation Caused By Incomplete Normalization

Replacing the spaces inside ‘Baja California Sur’, ‘Baja California’ and ‘Sur de California’ allows other place names that pattern similarly to shine through in the scatterplot. This reflects more accurate word vector representations; a small sketch of the replacement step follows.
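
The sketch below illustrates that replacement; the list of multi-word place names is illustrative, and longer names must be handled before shorter ones so that 'baja california sur' is not split by the 'baja california' rule.

import re

# longest names first, so the more specific place name wins
multiword_places = ['baja california sur', 'baja california']

def join_place_names(text):
    for place in multiword_places:
        text = re.sub(place, place.replace(' ', ''), text)
    return text

print(join_place_names('nuevos casos en baja california sur'))
# nuevos casos en bajacaliforniasur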

A better graph results from replacing ‘Baja California Sur’ with ‘bajacaliforniasur’, which better captures the state name.

Getting Started With The Command Line

First of all, what is a ‘command line’? Visually, within a Microsoft Operating System, the command line looks like this:

Pictured is a command line. You can access this by searching ‘CMD’ in your Windows search bar.

You can access the command line by typing ‘CMD’ on your Windows Desktop.

As the name implies, a command line is an interface where a user inputs literal commands to accomplish a computing task. The same or a similar set of tasks is often possible via mouse clicks, keyboard shortcuts outside of the CMD screen, and so on. The command line was the primary mode of interacting with a computer in the 1970s, prior to the invention of the mouse and the point-and-click method of accessing files. You may reasonably ask: ‘If people can accomplish basic computing tasks with a simple mouse click and a scroll of the screen, then why would anyone use the command line?’ The answer is that in modern computing, heavier or more complex tasks can be accomplished more easily by providing specific instructions that cannot easily be expressed with mouse clicks.

Let’s explore a simple and commonly used command to just get started in the command line.

The MV command

The MV command, an often-used command for server administrators and grunt-level programmers (like me), can help move files around and between computers. Think of programming at this level as a series of steps or tasks to accomplish. Individually, this command seems insignificant, but oftentimes commands are used in harmony with other commands.

Like I alluded to above in the introduction, there is some parity between the tasks accomplished with a mouse and the command line. They are both a type of interface with the computer/server.

For example, one could drag and drop a series of files, one by one, into a different folder based on some criteria. However, let’s say you have a large set of files with some common denominator in terms of text or naming conventions. While you could continue to drag and drop, there are moments in which there are simply too many files to easily view within a screen.

This simple command, ‘mv’, short for move, permits you to move files to other directories from within a command line interface.

mv FILENAME somedirectory/

In general, you first type the command name, in this case ‘mv‘, and then to the right of it you specify which file to move and the destination directory.

mv FILENAME ZIP/

Below is a screenshot of a real-world example: a directory that contains these files and the directory where I want to place them. You may have noticed that there is a ‘*‘ symbol right before zip, followed by the directory ZIP.

The ‘*‘ symbol is called the Kleene star or wildcard character. The wildcard matches any sequence of characters, of any length. Here, the * tells the computer to match filenames with any characters preceding the string I specified, ‘zip’. It will therefore move all the zip files into the ZIP/ directory.

‘MV’ command where we specify the movement of zip files into ZIP

Here is the literal codeblock:

mv *zip ZIP/

As you can see, the file names to be moved are ‘g2p-seq2seq-master.zip’, ‘spa-eng.zip’, ‘spanish_g2p.zip’ and ‘NER_news-main.zip’. It’s a lot easier to just type *zip and go about your business that way. Imagine if, instead of 4 files, it was 400 files you needed to move; therein lies the utility of such a simple command that can catch all the filenames.

Please note, you can also type the file names individually. Here is an example:

mv 'g2p-seq2seq-master.zip' 'spa-eng.zip' 'spanish_g2p.zip' 'NER_news-main.zip' ZIP/

Hopefully, this brief overview of how to use the ‘MV’ command is helpful. Feel free to reach out at lezama@lacartita.com with any questions. Thanks!

Leveraging NVIDIA Downloads

A common issue when installing TensorFlow in the Anaconda Python environment is an error message citing a missing DLL file. You will also receive the same error when invoking Spacy language models that need TensorFlow installed properly.

Thus, running the code below will produce an error message if the proper dependencies are not installed:

import spacy
import spacy.attrs
nlp = spacy.load('es_core_news_sm')

The error message below will appear if the NVIDIA GPU Developer kit is not installed:

"W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found"
"I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine."

The issue is the lack of a GPU developer kit from NVIDIA.
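
One way to confirm whether TensorFlow can actually see a GPU after installing the toolkit is the standard TensorFlow device query below; an empty list simply means the CUDA libraries are still not visible to TensorFlow.

import tensorflow as tf

# prints a list of GPU devices; an empty list means CUDA/cuDNN are not set up
print(tf.config.list_physical_devices('GPU'))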

CUDA Toolkit 11.4 Update 1 Downloads | NVIDIA Developer

Frequency Counts For Named Entities Using Spacy/Python Over MX Spanish News Text

In this post, we review some straightforward code written in Python that allows a user to process text and retrieve named entities alongside their numerical counts. The main dependencies are Spacy, a small, compact version of its Spanish language model built for Named Entity Recognition, and the plotting library Matplotlib if you're looking to visualize the resulting data.

Motivation(s)

Before we begin, it may be relevant to understand why we would want to extract these data points in the first place. Oftentimes, there is a benefit to quickly knowing which named entities a collection (or even a Hadoop-sized bucket) of stories references. For instance, one benefit is the ability to quickly visualize the relative importance of an entity to these stories without having to read all of them.

Even if done automatically, the process of Named Entity Recognition is still guided by very basic principles, I think. For instance, the very basic reasoning surrounding a retelling of events for an elementary school summary applies to the domain of Named Entity Recognition. That is mentioned below in the Wh-question section of this post.

Where did you get this data?

Another important set of questions is what data are we analyzing and how did we gather this dataset?

Ultimately, a great number of computational linguists and NLP practitioners are interested in compiling human-rights-centered corpora to create tools that quickly analyze news flow on these points. When dealing with sensitive topics, the data has to center on the topics relevant to at-need populations. This specific dataset centers around ‘Women/Women’s Issues’ as specified by the group at SugarBearAI.

As for ‘where did they obtain this data?’, that question is answered as follows: this dataset of six hundred articles exists in the LaCartita Db (which contains several thousand hand-tagged articles) and is annotated by hand. The annotators are a group of Mexican graduates from UNAM and IPN universities. A uniform consensus amongst the news taggers was required for a document's introduction into the set. There were 3 women and 1 man within the group of analysts, all of them having prior experience gathering data in this domain.

While the six hundred Mexican Spanish news headlines analyzed are unavailable on GitHub, a smaller set is provided on that platform for educational purposes. Under all conditions, the data was tokenized and normalized with a fairly sophisticated set of Spanish-centric regular expressions.

Please feel free to reach out to that group at research@lacartita.com for more information on this hand-tagged dataset.

Wh-Questions That Guide News Judgments

With all of the context on data and motivations in mind, we review some points on news judgment that can help with the selection of texts for analysis and guide the interpretation of automatically extracted data points.

Basic news judgment is often informed by the following Wh-questions:

1.) What occurred in this news event? (Topic Classification; Event Extraction)

2.) Who was involved in the news?

3.) When did this news event take place?

4.) Where did it take place?

5.) Why did this take place?

If you think of these questions at a fairly high level of abstraction, then you’ll allow me to posit that the first two questions are often the domain of Topic Classification and Named Entity Recognition, respectively. This post will deal with the latter, but assume that this issue of extracting named entities deals with documents already organized on the basis of some unifying topic. This is why it’s useful to even engage in the activity.

In other words, you – the user of this library – will be open to providing a collection of documents already organized under some concept/topic. You would be relying on your knowledge of that topic to make sense of any frequency analysis of named entities, important terms (TF-IDF) etc. as is typical when handling large amounts of unstructured news text. These concepts – NER and TFIDF – are commonly referenced in Computational Linguistics and Information Retrieval; they overlap in applied settings frequently. For instance, TF-IDF and NER pipelines power software applications that deal in summarizing complex news events in real time. So, it’s important to know that there are all sorts of open source libraries that handle these tasks for any average user or researcher.

Leveraging Spacy’s Lightweight Spanish Language Model

The actual hard work lies in identifying distinct entities; the task involves statistical processes that try to generalize the typical morphological shape of a Named Entity in text.

In this example, my script is powered by the smaller language model from Spacy. One thing to note is that the model's training text has its origins in Wikipedia. This means that newer, contemporary types of text may not be sufficiently well covered; breadth doesn't imply depth of analysis. Anecdotally, over this fairly small headline-only corpus sourced by hand with UNAM and IPN students, containing text on the Mexican president Andres Manuel Lopez Obrador, Covid and local crime stories, we see performance below 80 percent accuracy from the small Spanish language model. Here's the small, 600-headline-strong sample: example headlines referenced.

NER_News Module

Using the scripts below, you can extract persons and organizations from a corpus with Spacy. We will use the lighter Spanish language model from Spacy's natural language toolkit. This post assumes that you've dealt with the basics of Spacy installation alongside its required models; if not, visit here. We should therefore expect the lines below to run without problems:

import spacy
import spacy.attrs
nlp = spacy.load('es_core_news_sm')

In this example, we use clean texts that are '\n' (newline) separated. We identify and count the entities, either writing the results of the NER process to memory or printing them to the console. The following line contains the first bit of code that individuates each relevant piece of the text. If these were encyclopedic or news articles, the split would probably capture paragraph- or sentence-level breaks in the text:

raw_corpus = open('corpora/titularesdemx.txt','r', encoding='utf-8').read().split("\n")[1:]

The next step involves placing each NER string in a dictionary with its frequency count as the value. This dictionary will be the result of running our ‘sacalasentidades’ method over the raw corpus. The method extracts geopolitical entities, like a country, or PER-tagged entities, like a world leader.
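
For intuition, here is a minimal sketch of what an extraction step like sacalasentidades might do internally; the actual org_per module lives on the GitHub repo and may filter on different labels (the small Spanish model emits labels such as PER, LOC and ORG).

def extract_entities_sketch(lines):
    # run the Spacy pipeline over each line and keep person/place/organization spans
    entities = []
    for line in lines:
        doc = nlp(line)
        for ent in doc.ents:
            if ent.label_ in ('PER', 'LOC', 'ORG'):
                entities.append(ent.text)
    return entities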

import spacy
import spacy.attrs
nlp = spacy.load('es_core_news_sm')

import org_per
raw_corpus = open('corpora/titularesdemx.txt','r', encoding='utf-8').read().split("\n")[1:]
entities = org_per.sacalasentidades(raw_corpus)

# use the list of entities that are ORG or GEO and count up each individual token.
tokensdictionary = org_per.map_entities(entities)

The formatted output of the tokensdictionary object will look like this:

{'AMLO': 11,
 'Desempleo': 1,
 'Perú': 1,
 'América Latina': 3,
 'Banessa Gómez': 1,
 'Resistir': 2,
 'Hacienda': 1,
 'Denuncian': 7,
 'Madero': 1,
 'Subastarán': 1,
 'Sánchez Cordero': 4,
 'Codhem': 1,
 'Temen': 2,
 'Redes de Derechos Humanos': 1,
 'Gobernación': 1,
 'Sufren': 1,
 '¡Ni': 1,
 'Exigen': 2,
 'Defensoras': 1,
 'Medicina': 1,
 'Género': 1,
 'Gabriela Rodríguez': 1,
 'Beatriz Gasca Acevedo': 1,
 'Diego "N': 1,
 'Jessica González': 1,
 'Sheinbaum': 3,
 'Esfuerzo': 1,
 'Incendian Cecyt': 1,
 'Secretaria de Morelos': 1,
 'Astudillo': 1,
 'Llaman': 3,
 'Refuerzan': 1,
 'Mujer Rural': 1,
 'Inician': 1,
 'Violaciones': 1,
 'Llama Olga Sánchez Cordero': 1,
 'Fuentes': 1,
 'Refuerza Michoacán': 1,
 'Marchan': 4,
 'Ayelin Gutiérrez': 1,
 'Maternidades': 1,
 'Coloca FIRA': 1,
 'Coloquio Internacional': 1,
 'Ley Olimpia': 3,
 'Toallas': 1,
 'Exhorta Unicef': 1,
 'Condena CNDH': 1,
 'Policías de Cancún': 1,
 'Exposición': 1,
 'Nadia López': 1,
 'Aprueba la Cámara': 1,
 'Patriarcales': 1,
 'Sofía': 1,
 'Crean Defensoría Pública para Mujeres': 1,
 'Friedrich Katz': 1,
 'Historiadora': 1,
 'Soledad Jarquín Edgar': 1,
 'Insuficientes': 1,
 'Wikiclaves Violetas': 1,
 'Líder': 1,
 'Alcaldía Miguel Hidalgo': 1,
 'Ventana de Primer Contacto': 1,
 'Parteras': 1,
 'App': 1,
 'Consorcio Oaxaca': 2,
 'Comité': 1,
 'Verónica García de León': 1,
 'Discapacidad': 1,
 'Cuánto': 1,
 'Conasami': 1,
 'Amnistía': 1,
 'Policía de Género': 1,
 'Parteras de Chiapas': 1,
 'Obligan': 1,
 'Suspenden': 1,
 'Contexto': 1,
 'Clemencia Herrera': 1,
 'Fortalecerán': 1,
 'Reabrirá Fiscalía de Chihuahua': 1,
 'Corral': 1,
 'Refugio': 1,
 'Alicia De los Ríos': 1,
 'Evangelina Corona Cadena': 1,
 'Félix Salgado Macedonio': 5,
 'Gabriela Coutiño': 1,
 'Aída Mulato': 1,
 'Leydy Pech': 1,
 'Claman': 1,
 'Insiste Morena': 1,
 'Mariana': 2,
 'Marilyn Manson': 2,
 'Deberá Inmujeres': 1,
 'Marcos Zapotitla Becerro': 1,
 'Vázquez Mota': 1,
 'Dona Airbnb': 1,
 'Sergio Quezada Mendoza': 1,
 'Incluyan': 1,
 'Feminicidios': 1,
 'Contundente': 1,
 'Teófila': 1,
 'Félix Salgado': 1,
 'Policía de Xoxocotlán': 1,
 'Malú Micher': 1,
 'Andrés Roemer': 1,
 'Basilia Castañeda': 1,
 'Salgado Macedonio': 1,
 'Menstruación Digna': 1,
 'Detenidas': 1,
 'Sor Juana Inés de la Cruz': 1,
 'María Marcela Lagarde': 1,
 'Crean': 1,
 'Será Rita Plancarte': 1,
 'Valparaiso': 1,
 'México': 1,
 'Plataformas': 1,
 'Policías': 1,
 'Karen': 1,
 'Karla': 1,
 'Condena ONU Mujeres': 1,
 'Llaman México': 1,
 'Sara Lovera': 1,
 'Artemisa Montes': 1,
 'Victoria': 2,
 'Andrea': 1,
 'Irene Hernández': 1,
 'Amnistía Internacional': 1,
 'Ley de Amnistía': 1,
 'Nació Suriana': 1,
 'Rechaza Ss': 1,
 'Refugios': 1,
 'Niñas': 1,
 'Fiscalía': 1,
 'Alejandra Mora Mora': 1,
 'Claudia Uruchurtu': 1,
 'Encubren': 1,
 'Continúa': 1,
 'Dulce María Sauri Riancho': 1,
 'Aprueba Observatorio de Participación Política de la Mujer': 1,
 'Plantean': 1,
 'Graciela Casas': 1,
 'Carlos Morán': 1,
 'Secretaría de Comunicaciones': 1,
 'Diego Helguera': 1,
 'Hidalgo': 1,
 'LGBT+': 1,
 'Osorio Chong': 1,
 'Carla Humphrey Jordán': 1,
 'Lorenzo Córdova': 1,
 'Edomex': 1,
 'CEPAL': 1,
 'Delitos': 1,
 'Murat': 1,
 'Avanza México': 1,
 'Miguel Ángel Mancera Espinosa': 1,
 'Reconoce INMUJERES': 1,
 'Excluyen': 1,
 'Alejandro Murat': 1,
 'Gómez Cazarín': 1,
 'Prevenir': 1,
 'Softbol MX': 1,
 'Martha Sánchez Néstor': 1}

Errors in Spacy Model

One of the interesting errors in the Spacy-powered NER process is the erroneous tagging of ‘Plantean’ as a named entity when, in fact, this string is a verb. Similarly, ‘Delitos’ and ‘Excluyen’ are given ORG or PER tags. Possibly, the morphological shape and orthographic tendencies of headlines throw off the small language model. Thus, even with this small test sample, we can see the limits of out-of-the-box open source solutions for NLP tasks. This shows the value added by language analysts and data scientists in organizations dealing with even more specific or specialized texts.

Handling Large Number of Entries On Matplotlib

One issue is that there will be more Named Entities recognized than is useful or even possible to graph.

Despite the fact that we have a valuable dictionary above, we still need to go further and trim down the dictionary in order to figure out what is truly important. In this case, the next Python snippet is helpful in cutting out all dictionary values that contain a frequency count of only ‘1’. There are occasions in which a minimum value must be set.

For instance, suppose you have 1000 documents with 1000 headlines. Your NER analyzer must read through these headlines, which ultimately are not a lot of text. Therefore, the minimum count you would want to eliminate is likely to be ‘1’; if you were analyzing the entirety of the document bodies, you might want to raise the minimum threshold for a dictionary value's frequency.

The following dictionary comprehension uses a for-loop-like structure to filter out terms whose frequency is ‘1’, the most common frequency. This is appropriate for headlines.

    filter_ones = {term:frequency for term, frequency in data.items() if frequency > 1}

While this dictionary filtering threshold is suited to headlines, a higher one is needed for body text. A body of 10,000 or more words implies that the threshold for the minimum frequency value should be higher, for example 10.

    filter_ones = {term:frequency for term, frequency in data.items() if frequency > 10}

The resulting dictionary can then be presented as a Matplotlib figure using the function below:

 

import numpy as np
import matplotlib.pyplot as plt

def plot_terms_body(topic, data):
    """
    The np.arange calls are used to plot the data programmatically;
    intervals are determined by the counts within the dictionary.

    Args: topic is the name of the plot/category. The 'data' argument is a
    dictionary of term frequencies.
    """
    # The bar plot should be optimized for the max and min size of
    # the individual counts.
    filter_ones = {term: frequency for term, frequency in data.items() if frequency > 10}
    filtered = {term: frequency for term, frequency in data.items() if frequency > round(sum(filter_ones.values())/len(filter_ones))}
    print(round(sum(filtered.values())/len(filtered)), "Average count: total counts of the filtered terms divided by the number of filtered terms.")
    terms = filtered.keys()
    frequency = filtered.values()
    y_pos = np.arange(len(terms), step=1)
    # min dictionary value, max filtered value;
    x_pos = np.arange(min(filtered.values()), max(filtered.values()), step=round(sum(filtered.values())/len(filtered)))
    plt.barh(y_pos, frequency, align='center', alpha=1)
    plt.yticks(y_pos, terms, fontsize=12)
    plt.xticks(x_pos)
    plt.xlabel('Frecuencia en encabezados')
    plt.title(str(topic), fontsize=14)
    plt.tight_layout()
    plt.show()
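
Calling the function is then a one-liner; 'Mujeres' here is just an illustrative topic label for the plot title, with tokensdictionary being the frequency dictionary built earlier.

plot_terms_body('Mujeres', tokensdictionary)
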
Named Entities or Frequent Terms

We are able to extract the most common GEO- or PER-tagged Named Entities in a ‘Women’ tagged set of documents sourced from Mexican Spanish news text.

Surprise, surprise: the terms ‘Exigen‘, ‘Llaman‘ and ‘Marchan‘ cause problems due to their morphological and textual shape, and the term ‘Victoria‘ is orthographically identical and homophonous to a proper name, but in this case it is not a Named Entity. These false positives in Spacy's NER process just reflect how language models should be trained over specific texts for better performance; perhaps an NER model trained over headlines would fare better. The data was already cleaned during the collection process detailed earlier, so normalization and tokenization were handled beforehand.

Using Spacy in Python To Extract Named Entities in Spanish

The Spacy small language model has some difficulty with contemporary news texts that are not either Eurocentric or US-based. This lack of accuracy with contemporary figures likely owes in part to a less thorough scrape of Wikipedia and to changes that have taken place since 2018 in Mexico, Bolivia and other LATAM countries with highly variant dialects of Spanish. Regardless, that dataset can and does garner some results for the purpose of this exercise, which means we can toy around a bit with some publicly available data.

Entity Hash For Spanish Text

In this informal exercise, we will try to hack our way through some Spanish text. Specifically, we make use of NER capabilities sourced from public data (no rule-based analysis) along with some functions I find useful for visualizing Named Entities in Spanish text. We have prepared a Spanish news text on the topic of ‘violence’, or violent crime, sourced from publicly available Spanish news content in Mexico.

Using Spacy, you can hash the entities extracted from a corpus. We will use the lighter Spanish language model from Spacy's natural language toolkit. This language model is a statistical description of Wikipedia's Spanish corpus, which is likely slanted towards White Hispanic speech, so beware of its bias.

First, import the libraries:

import spacy
import spacy.attrs
nlp = spacy.load('es_core_news_sm')

With the libraries in place, we can import the module ‘org_per’. This module refers to this GitHub repo.

The work of identifying distinct entities is done in a function that filters for geographical entities and people. These are labeled as ‘GEO’ and ‘PER’ tags, respectively, in Spacy's data.

The variable ‘raw_corpus‘ is the argument you provide, which should be some Spanish text data. If you don’t have any, visit the repository and load that file object.

import org_per
raw_corpus = open('corpus_es_noticias_mx.txt','r', encoding='utf-8').read().split("\n")[1:]
entities = org_per.sacalasentidades(raw_corpus)

# use the list of entities that are ORG or PER and count up
# each individual token.
tokensdictionary = org_per.map_entities(entities)

As noted before, the model's training text has its origins in Wikipedia. This means that newer, more contemporary types of text may not be sufficiently well covered; breadth doesn't imply depth of analysis, because stochastic models rely on some passing resemblance to data they may never have seen.

Anecdotally, over a small corpus, we see performance below 80 percent accuracy for this language model. Presumably, a larger sampling of Wikipedia ES data would perform better, but certain trends in contemporary news text make it necessary to temper this expectation.

The output returned from running `org_per.map_entities(entities)` will look like this:

{"Bill Clinton": 123,
"Kenneth Starr" : 12,
}

The actual hashing is a simple enough method: each NER string is placed in a dictionary with its frequency count as the value. Within your dictionary, you may get parses of Named Entities that are incorrect; that is to say, they are not properly delimited because the Named Entity language model has no example resembling your parse. For instance, Lopez Obrador, the current president of Mexico, is not easily recognized as ‘PER’.
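
A minimal stand-in for that counting step, assuming org_per.map_entities does something along these lines, could be written with collections.Counter; the real module may differ.

from collections import Counter

def map_entities_sketch(entities):
    # count how many times each extracted entity string appears
    return dict(Counter(entities))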

Accuracy

Accuracy is measured very simply by tabulating how much you agree with the returned Named Entities. The difference between the expected and returned values is your error rate. More on accuracy metrics in the next post.
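
One simple way to tabulate that agreement is sketched below, using toy values taken from the examples above; the numbers are illustrative, not measured results.

expected = {'Bill Clinton', 'Kenneth Starr'}                 # entities you judge correct
returned = {'Bill Clinton', 'Kenneth Starr', 'Delitos'}      # entities the model produced

agreed = expected & returned
accuracy = len(agreed) / len(returned)
error_rate = 1 - accuracy
print(accuracy, error_rate)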

Introduction to Python

The Python logo.

Python is a fairly flexible and fast programming language, considering that it is a high-level language. In this brief overview of the language, we present an exercise that opens a file. This task is nearly routine in any complex piece of work.

Now, if you are looking for a more advanced exercise, or one that simply covers a particular topic, I suggest this link. I also have a project where I identify the entities in a text, but the documentation is in English.

“High-level programming languages are characterized by a semantic structure very similar to the way humans write, which allows algorithms to be encoded more naturally, rather than coding them in the binary language of machines or at the assembly-language level.”

UNAM, F. J. (2004). Enciclopedia del lenguaje C. México: Alfaomega/RaMa.

Basically, this means that Python condenses operations that take more lines of code in other programming languages into a more compact and friendly format. In part, this limits the language's relevance for certain industrial applications, but that is not especially relevant for a beginning or advanced programmer whose goals lie in NLP or Artificial Intelligence.

The important thing is that you can quickly carry out a task that would be impossible to complete by hand.

Python Modules/Libraries

A module or library in Python is a body of ready-made code (perhaps already included as part of your Python installation) that lets the user carry out certain tasks.

A module is a Python object with arbitrarily named attributes that you can bind and reference.

COVANTEC

The term ‘library’ or module exists in every programming language.

This program uses a statement that invokes another module: ‘import X’ is the pattern. For example, import csv means that the user wants to import the csv module, which specializes in opening and reading csv files.

import csv
#csv is a library for texts delimited by ','

Flexibility

Quite a lot can be accomplished with the libraries already installed in any downloaded version of Python. In fact, it is also possible (though not advisable in the long run) to use Python without engaging deeply with its object-oriented structure.

That is, you do not have to define very sophisticated classes or methods and you can still get good use out of the language.

In other words, you can use Python modules as if it were a scripting language like BASH. If none of these sentences makes much sense for lack of context, that is fine.

For now, it is enough to know that Python is quite flexible and easy to understand. A few lines of code can bring an easy (but repetitive) task to an adequate resolution.

Interpreted, Not Compiled

Python is a language that, without a compiler, can run and produce an analysis or result of some kind with great ease. For example, this interaction shows an arithmetic operation being carried out.

2 + 2 in Python:

Python in Anaconda: an interactive session in which a programmer runs a sequence of commands to obtain a final result.

Finally, I recommend using Anaconda-Spyder, since it comes with plenty of documentation and libraries.

Spyder is a development environment equipped with the modules a data analyst needs, along with good documentation and access to new libraries. The Anaconda group maintains a repository with the most up-to-date and relevant libraries.

Anaconda is available here, all the way at the bottom of the page.

Syntax For Opening And Closing A File In Python

A simple task like opening a file can be done with a single line of code, or with several, depending on how you like to organize your code.

#NOT CODE: this is where documentation or comments are added
file = open('ejemplo.txt', 'r')

#you can access methods of the 'file' object with '.'
file.read()

Alternatively, the syntax allows:

file = open('ejemplo.txt', 'r').read()

There is also the option of adding arguments so that the type of text you want to read is handled correctly, since there are different codecs depending on a language's characters. Note that we have not invoked any module for these operations.

file = open('ejemplo.txt', 'r', encoding='utf-8').read()
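
An alternative idiom worth knowing (not covered in the original post) is the with statement, which closes the file automatically once the block ends:

with open('ejemplo.txt', 'r', encoding='utf-8') as file:
    contents = file.read()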

Importing Preinstalled Libraries: Regular Expressions

One library that is already installed with Python is ‘regular expressions‘, simply called "re", which is imported as follows.

import re 

The library exposes the substitution method as an attribute, that is, with the ‘.’ symbol attached to the ‘re’ module.

objeto = re.sub('abc', 'cba', 'este es el texto que se manipula abc')

El ejemplo tiene como primer argumento de re.sub( …) a el hilario que se buscar reemplazar y el segundo argumento con que se busca reemplazar. Finalmente, el tercer argumento hace referencia al texto que buscar manipular: ‘este es el texto que se manipula abc’.