Dependency Parsing in NLP


Syntactic parsing, or dependency parsing, is the task of recognizing a sentence and assigning a syntactic structure to it. The most widely used syntactic structure is the parse tree, which can be generated using parsing algorithms. These parse trees are useful in applications like grammar checking and, more importantly, they play a critical role in the semantic analysis stage. For example, to answer the question “Who is the point guard for the LA Lakers in the next game?” we need to figure out its subject, objects and attributes to understand that the user wants the point guard of the LA Lakers, specifically for the next game.

Now the task of syntactic parsing is quite complex because a given sentence can have multiple parse trees, which we call ambiguities. Consider the sentence “Book that flight.”, which can form multiple parse trees based on its ambiguous part-of-speech tags unless these ambiguities are resolved. Choosing a correct parse from the multiple possible parses is called syntactic disambiguation. Parsing algorithms like Cocke-Kasami-Younger (CKY), Earley and chart parsing use a dynamic programming approach to deal with the ambiguity problem.
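To make the ambiguity concrete, here is a small sketch using NLTK's ChartParser. It uses the well-known prepositional-phrase attachment grammar from the NLTK book rather than “Book that flight.” (the toy grammar and sentence are my own choice for illustration), and it prints every parse the chart parser finds:

import nltk

# Toy grammar from the NLTK book: the PP "in my pajamas" can attach either
# to the verb phrase or to the noun phrase, so the sentence is ambiguous.
groucho_grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
""")

# The chart parser enumerates every parse tree licensed by the grammar.
parser = nltk.ChartParser(groucho_grammar)
for tree in parser.parse('I shot an elephant in my pajamas'.split()):
    tree.pretty_print()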
In this post, we will try out syntactic parsers from a few different libraries:

spaCy:

spaCy's dependency parser provides token properties to navigate the generated dependency parse tree. The dep_ attribute gives the syntactic dependency relationship between the head token and its child token. The syntactic dependency scheme comes from ClearNLP. The generated parse tree follows all the properties of a tree: each child token has exactly one head token, although a head token can have multiple children. We can obtain the head token with the token.head property and its children with the token.children property. The subtree of a token can also be extracted using the token.subtree property, and its ancestors with token.ancestors. To obtain the leftmost and rightmost tokens among a token's syntactic descendants, token.left_edge and token.right_edge can be used. It is also worth mentioning that a neighboring token can be accessed with token.nbor. spaCy doesn't provide a built-in tree representation, although you can use NLTK's tree representation. Here's a code snippet for it:

import spacy
from nltk import Tree

# Load the English model ("en" on spaCy 1.x; "en_core_web_sm" on newer versions)
en_nlp = spacy.load('en')


def tok_format(tok):
    # Label each node as Token_POS tag_Dependency tag
    return "_".join([tok.orth_, tok.tag_, tok.dep_])


def to_nltk_tree(node):
    # Recursively convert a spaCy token and its children into an NLTK Tree
    if node.n_lefts + node.n_rights > 0:
        return Tree(tok_format(node), [to_nltk_tree(child) for child in node.children])
    else:
        return tok_format(node)


command = "Submit debug logs to project lead today at 9:00 AM"
en_doc = en_nlp(u'' + command)

[to_nltk_tree(sent.root).pretty_print() for sent in en_doc.sents]

Here's the output format (Token_POS tag_Dependency tag):

[screenshot of the printed parse tree]
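Before moving on, here is a minimal sketch (assuming the same en_nlp pipeline loaded above) that navigates the parse directly through the token attributes described earlier:

# Walk the dependency parse through token attributes (en_nlp from above).
en_doc = en_nlp(u"Submit debug logs to project lead today at 9:00 AM")

for token in en_doc:
    children = [child.text for child in token.children]
    # token.left_edge / token.right_edge span the token's whole subtree.
    print(token.text, token.dep_, token.head.text, children,
          token.left_edge.text, token.right_edge.text)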

Let's try extracting the headword from a question to understand how dependencies work. A headword in a question can be extracted using various dependency relationships, but for now we will extract the nominal subject (nsubj) of the question as the headword. Here's how you can get the subject from the sentence.

from spacy.symbols import nsubj, attr, NOUN, PROPN

head_word = "null"
question = "What films featured the character Popeye Doyle ?"
en_doc = en_nlp(u'' + question)

for sent in en_doc.sents:
    for token in sent:
        # Take a nominal subject or attribute that is a noun / proper noun as the headword
        if token.dep == nsubj and (token.pos == NOUN or token.pos == PROPN):
            head_word = token.text
        elif token.dep == attr and (token.pos == NOUN or token.pos == PROPN):
            head_word = token.text
    print(question + " (" + head_word + ")")

Here we get the output with the headword “films”, which is pretty close; you can improve the accuracy by detecting more dependency relationships and adding headword rules:

What films featured the character Popeye Doyle ? (films)
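As a rough illustration of what “more dependency relationships” could look like (my own sketch, not a fixed recipe, reusing the en_doc parsed above), you could fall back to a direct object or to the sentence root when no nominal subject or attribute noun is found:

from spacy.symbols import nsubj, attr, dobj, NOUN, PROPN

def extract_head_word(sent):
    # Prefer a nominal subject or attribute that is a noun / proper noun.
    for token in sent:
        if token.dep in (nsubj, attr) and token.pos in (NOUN, PROPN):
            return token.text
    # Fallback 1: a direct object noun.
    for token in sent:
        if token.dep == dobj and token.pos in (NOUN, PROPN):
            return token.text
    # Fallback 2: the syntactic root of the sentence.
    return sent.root.text

for sent in en_doc.sents:
    print(extract_head_word(sent))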

spaCy also has a dependency visualizer, displaCy. Here is the demo output for our input question:

[displaCy visualization of the dependency parse]
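If you prefer to render this programmatically, displaCy ships with spaCy from version 2.0 onward; here is a minimal sketch (the model name is an assumption, use whichever English model you have installed):

import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")  # assumes this English model is installed
doc = nlp("What films featured the character Popeye Doyle ?")
# displacy.render returns the markup (and displays inline in a Jupyter notebook);
# displacy.serve(doc, style="dep") would start a local web server instead.
html = displacy.render(doc, style="dep")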

To install spaCy, refer to Setting up Natural Language Processing Environment with Python.

(Working on NLTK; will update as soon as possible.)

Further Reading:

The Process of Information Retrieval


A friend of mine published this really great post about Information Retrieval. I have reblogged it here.

AMIT GUNJAL

Information Retrieval (IR) is the activity of obtaining information from large collections of Information sources in response to a need.


The working of the Information Retrieval process is explained below:

  • The process of Information Retrieval starts when a user enters a query into the system through some graphical interface.
  • These user-defined queries are statements of the needed information, for example, queries typed by users into search engines.
  • In IR a single query does not match one right data object; instead it matches several data objects in the collection, from which the most relevant documents are taken into consideration for further evaluation.
  • The relevant documents are ranked to find the documents most related to the given query (see the sketch below).
  • This ranking is the key difference between database searching and Information Retrieval.
  • After that the query is sent to the core of the system. This part has access to the content management…

View original post 223 more words
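To make the match-and-rank step described above concrete, here is a minimal sketch of my own (not from the original post), ranking a tiny document collection against a query with TF-IDF and cosine similarity from scikit-learn:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny toy collection standing in for the system's document store.
documents = [
    "Information retrieval finds relevant documents for a user query.",
    "Databases return the single record that exactly matches a query.",
    "Search engines rank many candidate documents by relevance.",
]
query = "how do search engines rank documents"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)   # index the collection
query_vector = vectorizer.transform([query])        # represent the query

# Unlike a database lookup, every document gets a relevance score,
# and the collection is sorted by that score.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[idx]), 3), documents[idx])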

What are some of the little things your parents did for you that made the biggest impact?


This answer was originally published by me on Quora, answering the question:

What are some of the little things your parents did for you that made the biggest impact?

And here is my response:

“When I was a kid my parents bought me these books:

World Book Encyclopedia

The World Book Encyclopedia. I remember how excited I was the day we brought these books home from a friend of my father. From that day I would open up a volume daily and just scan through it; it was so addictive. These books were like a treasure chest with more information than I could ever consume. At that time this was my Wikipedia (we didn't have the internet then). The pictures and illustrations in it were so beautiful and informative. Anyone who has these knows that the illustrations of the Human Body had different transparent pages, and with each page you could see a different aspect/layer of your body (if you want I can upload pictures), or the Animals section with detailed illustrations and information. I spent a huge deal of my childhood in these books. This greatly improved my research skills: if you had to search for a specific topic then you also had to search for the related topics.
We later bought another book:

Concise Atlas

Concise Atlas of the World, bought from a door-to-door seller. It had maps of every country, and I would sit for hours looking at them, wondering what would actually be there at those places. It was Google Maps for me.
I also remember my father reading us the Bhagavad Gita, Gramgeeta and Manache Shlok every evening. He would read those verses and explain each one to us. It was just like a part of our curriculum. My brother and I would wait eagerly for these reading sessions, and no matter how tired he was we would make him read at least half a page. He would explain them with real-life stories and some funny jokes here and there. I never understood its importance at that time…we just did it because it was fun. But now I understand how it made me and my brother better citizens and gave us the ability to differentiate between right and wrong.

I now know and appreciate how foresighted my father was.”

Read it on Quora.