
IBM Watson Analytics – Powerful Analytics for Everyone

Update – Watson Analytics Beta is available now. Sign up and try it now at http://www.ibm.com/analytics/watson-analytics/

On 16 Sep 2014, IBM announced Watson Analytics, a breakthrough natural language-based cognitive service that provides instant access to powerful predictive and visual analytics tools for businesses. Watson Analytics is designed to make advanced and predictive analytics easy to acquire and use.

Image: Watson Analytics Sample 1

Watson Analytics offers a full range of self-service analytics, including easy-to-use data refinement and data warehousing services that make it easier for business users to acquire and prepare data – beyond simple spreadsheets – for analysis and visualization they can act upon and interact with. Watson Analytics also incorporates natural language processing, so business users can ask the right questions and get results in familiar business terms, e.g. "Who are my most profitable customers and why?", "Which customers are showing signs that they might be considering defecting to a competitor?", "Which sales opportunities are likely to turn into wins?", or "Why are these products being returned?"

I think the most amazing aspect is that Watson Analytics can show not only what is likely to happen but also what you can do about it, e.g. "What actions could I take that would improve my chances of closing specific deals in the pipeline?" or "Where should we be focusing our loyalty programs?"

Watson Analytics is surely a powerful tool for any business. To make it even easier for businesses to use, the technology is being made available over the cloud. It will be hosted on SoftLayer and available through the IBM Cloud marketplace. IBM also intends to make Watson Analytics services available through IBM Bluemix so that developers and ISVs can leverage its capabilities in their applications. Certain Watson Analytics capabilities will be available for beta test users within 30 days and will be offered in a variety of freemium and premium packages starting later this year.

This is where Watson Analytics distinguishes itself in the crowded analytics marketplace: IBM makes really advanced business analytics accessible to any business user anywhere, at no cost to get started. In the freemium business model, a product or service is provided free to a large group of users, and a premium is charged for access to more data sources, data storage and enterprise capabilities.

The beta for the service is expected to launch in September, with general availability later this year. Don’t wait! Check out IBM Watson Analytics!


Elements of Watson – Natural Language Processing, an introduction

I would say the core component of IBM Watson is Natural Language Processing; without it, Watson would not even be able to understand the question, let alone answer it. NLP allows Watson to derive the meaning of the question: how many parts the question has, and how to interpret the meaning of, and relation between, each part. This semantics of the question obviously has a direct bearing on the correctness of the answer.

Natural Language Processing, or NLP as it is called, is a huge area within artificial intelligence concerned with human-computer interaction (HCI). I do not think of myself as knowledgeable on this subject, but lately I did try to understand its depth with respect to my interest in Watson. This blog post simply documents that study to some extent.

ELIZA, written at MIT by Joseph Weizenbaum around 1964-66 and named after the protagonist of George Bernard Shaw’s Pygmalion, is one of the earliest examples of NLP. ELIZA is, in some sense, the great-grandmother of Apple’s Siri. :)

The first thing to know when beginning with NLP is the linguistic concepts. Grammar is at the core of any language; the grammar of a language is how its sentences are constructed. It is interesting to note that the first systematic grammars originated in Iron Age India, with Yaska (6th century BC), Pāṇini (4th century BC) and his commentators Pingala (c. 200 BC), Katyayana, and Patanjali (2nd century BC). Tolkāppiyam, the earliest Tamil grammar, is mostly dated to before the 5th century AD.

In NLP, multiple text processing components are used in a sort of pipeline of tasks, performed in order to derive value from text. The main text processing components are tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and co-reference resolution.

Tokenization is breaking up a stream of text into smaller meaningful parts called tokens. Tokens are usually words, phrases or sentences demarcated by punctuation. In English, words are mostly separated from each other by blanks, or white space as it is called in the IT world. The thing to remember, though, is that not all white space is equal, e.g. “San Francisco” or “fast food” should be taken as a single unit. On the other hand, “I’m” should be two words: “I am”. Further challenges to consider are abbreviations, acronyms, hyphenated words and, most importantly, numerical and special expressions such as telephone numbers, date formats and vehicle license numbers.
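To make this concrete, here is a minimal tokenization sketch using the open-source Python NLTK library. This is purely illustrative; it is not the tokenizer Watson itself uses (IBM has not published that).

import nltk

nltk.download("punkt", quiet=True)  # one-time download of the tokenizer models

text = "I'm headed to San Francisco. Call me at 555-0199."
print(nltk.word_tokenize(text))
# Note how "I'm" splits into two tokens ("I", "'m") while "San Francisco"
# stays two separate tokens - treating it as one unit needs an extra step.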

Sentence segmentation is, simply put, dividing the stream of text into sentences. Let us not jump to the conclusion that it is basically looking for a period or other end-of-sentence punctuation. Remember, the period is used in many more contexts than just the end of a sentence: it may denote an abbreviation, a decimal point, or part of an email address. Question marks and exclamation marks may appear in embedded quotations, emoticons, computer code, and slang. Thus, we need specialized algorithms to reliably identify the end of a sentence. In fact, there is a special name for such algorithms: Sentence Boundary Detection. I read a very interesting paper on this by Dan Gillick of UC Berkeley: https://code.google.com/p/splitta/ He claims splitta includes proper tokenization and models for very high accuracy sentence boundary detection (English only for now). The models are trained on Wall Street Journal news combined with the Brown Corpus, which is intended to be widely representative of written English. Error rates on test news data are near 0.25%.
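As a small illustration, NLTK ships a pre-trained Punkt model for sentence boundary detection; it serves here as a stand-in for the splitta models mentioned above.

import nltk

nltk.download("punkt", quiet=True)

text = ("Dr. Smith paid $4.50 for the report. "
        "He emailed it to jane.doe@example.com yesterday.")
for sentence in nltk.sent_tokenize(text):
    print(sentence)
# The periods after "Dr.", in the price and in the email address are not
# treated as boundaries; only the two true sentence breaks are detected.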

Another very interesting component of text processing is part-of-speech tagging. The basic units of a language are words, and linguists classify words into various classes, or ‘parts of speech’ (POS). In English these are noun, verb, article, adjective, preposition, pronoun, adverb, conjunction, and interjection. The process of assigning one of the parts of speech to a given word, or rather token, is called part-of-speech tagging. A POS tagger is an algorithm or program which marks tokens with their corresponding word type based on the token itself and the context of the token. The context of the token is the important part: Wikipedia has a very interesting example in the sentence “The sailor dogs the hatch.”, where “dogs”, usually thought of as just a plural noun, is in this context a verb. A variety of algorithms are used to do POS tagging, among them the Viterbi algorithm, the Brill tagger, Constraint Grammar, and the Baum-Welch algorithm. Methods like hidden Markov models and visible Markov models have been used, and many machine learning methods have also been applied to the problem, e.g. SVMs, maximum entropy classifiers, the maximum-entropy Markov model (MEMM, or conditional Markov model, CMM), perceptrons, and nearest-neighbor methods.
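Here is what that looks like with NLTK’s off-the-shelf tagger (tags are from the Penn Treebank tagset); again, this is an illustration, not Watson’s own tagger.

import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("The sailor dogs the hatch.")
print(nltk.pos_tag(tokens))
# In this context a good tagger should prefer VBZ (verb, 3rd person
# singular present) for "dogs" over NNS (plural noun); simpler taggers
# often get exactly this kind of case wrong.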

Named entity extraction is the further classification of sequences of words in text as names of persons, organizations, locations, expressions of time, quantities, monetary values, percentages, etc. There are many approaches to named entity extraction, or named entity recognition (NER) as it is sometimes called, from linguistic grammar-based techniques to statistical models. Conditional random fields (CRFs) are a class of statistical modeling methods often applied to NER.
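NLTK also bundles a simple named entity chunker, which serves here as a stand-in for the CRF-based approaches mentioned above.

import nltk

for pkg in ("punkt", "averaged_perceptron_tagger",
            "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)

sentence = "IBM announced Watson Analytics in New York on 16 September 2014."
tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sentence)))
print(tree)
# Subtrees labeled ORGANIZATION, GPE (geo-political entity) and so on
# mark the spans recognized as named entities.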

Parsing is the task of analyzing the grammatical structure of natural language. A parser finds which groups of words go together (as “phrases”) and which words are the subject or object of a verb, and generates a parse tree of a given sentence. Parse trees are something like the sentence diagrams we drew when learning English grammar in school. Parse trees can be constituency-based or dependency-based. A constituent is a word or a group of words that functions as a single unit within a hierarchical structure in a phrase structure grammar; the dependency relation, on the other hand, views the (finite) verb as the structural center of all clause structure, with all other syntactic units (e.g. words) directly or indirectly dependent on the verb.
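To make the parse tree idea concrete, here is a toy constituency parse with a hand-written context-free grammar and NLTK’s chart parser; a real parser would use a broad-coverage grammar learnt from treebanks.

import nltk

grammar = nltk.CFG.fromstring("""
  S  -> NP VP
  NP -> Det N
  VP -> V NP
  Det -> 'the'
  N  -> 'sailor' | 'hatch'
  V  -> 'dogs'
""")
parser = nltk.ChartParser(grammar)
for tree in parser.parse("the sailor dogs the hatch".split()):
    tree.pretty_print()
# The tree shows an S node spanning the sentence, with the constituent
# NP "the sailor" and the VP "dogs the hatch" beneath it.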

Chunking, also called shallow parsing, is basically the identification of parts of speech and short phrases, e.g. noun phrases or verb phrases. Full parsing is expensive; partial or shallow parsing is much faster, may be sufficient for many applications, and can also serve as a possible first step toward full parsing.
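A quick chunking sketch: a regular-expression grammar over POS tags that brackets just the noun phrases.

import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tagged = nltk.pos_tag(nltk.word_tokenize(
    "The quick brown fox jumped over the lazy dog."))
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")  # det? adj* noun+
print(chunker.parse(tagged))
# Unlike full parsing this never builds a complete tree; it only brackets
# the NP chunks, which is often all an application needs.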

Last but not least, we discuss co-reference resolution: the task of finding all expressions that refer to the same entity in a discourse. E.g. in ‘Sheila said she fell down’, determining that ‘she’ refers to Sheila is co-reference resolution. There are many types of co-reference, like anaphora, cataphora, split antecedents, co-referring noun phrases, etc. Anaphora is when the proform follows the expression to which it refers, e.g. in ‘Sheila said she fell down’, ‘she’ follows Sheila, to whom it refers. Cataphora is when the proform precedes the expression to which it refers, e.g. in ‘She fell down, Sheila said’, ‘she’ precedes Sheila, to whom it refers. Algorithms intended to resolve co-references commonly look first for the nearest preceding individual that is compatible with the referring expression; some are deterministic or multi-pass sieve algorithms.
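Here is a deliberately naive sketch of that “nearest preceding compatible individual” heuristic, with a toy gender lexicon made up for the example; real resolvers (multi-pass sieves, statistical models) are far more sophisticated.

PRONOUNS = {"she": "female", "he": "male"}
GENDER = {"Sheila": "female", "Tom": "male"}  # toy lexicon, illustration only

def resolve(tokens):
    """Link each pronoun to the nearest preceding name of matching gender."""
    links = {}
    for i, tok in enumerate(tokens):
        if tok.lower() in PRONOUNS:
            for j in range(i - 1, -1, -1):  # scan backwards from the pronoun
                if GENDER.get(tokens[j]) == PRONOUNS[tok.lower()]:
                    links[i] = j
                    break
    return links

tokens = "Sheila said she fell down".split()
print({tokens[i]: tokens[j] for i, j in resolve(tokens).items()})
# -> {'she': 'Sheila'}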

So these were some of the very basic components of natural language processing, which form the core of IBM Watson.

Creating Cognitive Apps Powered By IBM Watson

There has been a lot of buzz around IBM Watson ever since it defeated Brad Rutter and Ken Jennings in the Jeopardy! challenge of February 2011. Recently there has again been much activity surrounding IBM Watson, when IBM issued a press release stating that it is forming a new Watson Group to meet the growing demand for cognitive innovations. In addition to launching the IBM Watson Group, IBM also announced three new services based on Watson’s cognitive intelligence: IBM Watson Discovery Advisor, IBM Watson Analytics and IBM Watson Explorer.

In this blog post, I want to share how one can create cognitive applications using IBM Watson. To start with, we need to understand what cognitive computing is. Originally referred to as artificial intelligence, ‘cognitive computing’ is not just a newer term but also a re-engineering of computer systems to model the human brain. The key difference is that artificial intelligence systems lacked the ability to learn from their experience. I like what I read somewhere: cognitive computing aims to reverse-engineer the mind! Cognitive computing systems learn and interact naturally with people to extend what either humans or machines could do on their own. These are the machines of tomorrow and will completely change the way people interact with computing systems. IBM Watson represents a first step into cognitive systems. The following three capabilities form the core of IBM Watson –
1. Natural language processing by helping to understand the complexities of unstructured data
2. Hypothesis generation and evaluation by applying advanced analytics to weigh and evaluate a panel of responses based on only relevant evidence
3. Dynamic learning by helping to improve learning based on outcomes to get smarter with each iteration and interaction

This IBM Redguide will help you understand the internals of IBM Watson and how it works.

Companies can embed Watson’s cognitive capabilities in their applications without needing to build deep natural language processing, machine learning and ranking algorithms or other core technology skills. They can accomplish this by embedding the Watson platform capabilities as a service, through the use of the APIs and tools on the Watson Developer Cloud.

Application Programming Interface (API): The Watson Question and Answer API (QAAPI) is a Representational State Transfer (REST) service interface that allows applications to interact with Watson. Using this API, one can pose questions to Watson, retrieve responses, and submit feedback on those responses. Beyond simple questions and responses, Watson can provide transparency into how it reached its conclusions through these REST services. Other functions, like ingesting content into the Watson platform, will also be exposed as tools and APIs that can be accessed from within an application.

Watson supports two ways of using the QAAPI: asynchronous and synchronous mode. In the asynchronous mode, the question is posted to Watson and the response returns a link from which to retrieve the answer when it is ready. In the synchronous mode, a POST operation sends the question to Watson and the answer is received synchronously.
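A hedged sketch of a synchronous call with the Python requests library follows. The host, path, header and JSON field names here are my illustrative assumptions; check the Watson Developer Cloud documentation for your instance’s actual endpoint and schema.

import requests

WATSON_URL = "https://gateway.example.com/watson/v1/question"  # hypothetical endpoint

payload = {"question": {"questionText": "Why are these products being returned?"}}
resp = requests.post(
    WATSON_URL,
    json=payload,
    headers={"Accept": "application/json", "X-SyncTimeout": "30"},  # assumed header name
    auth=("username", "password"),  # Basic Authentication over SSL
)
resp.raise_for_status()
print(resp.json())  # answers plus confidence scores, evidence and metadata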

The key steps involved in using QAAPI are:
1. Configure parameters, including authentication: First the user configures various parameters on the question, including Basic Authentication, content type, accept type and, for a request in synchronous mode, the sync timeout. The QAAPI uses Basic Authentication over SSL to provide security. The QAAPI supports JSON as the content type and accept type; XML is also supported.

2. Post question: The question needs to be in JSON format. The output can be customized by passing in, along with the question itself, the number of responses to return and requests for additional insights into how Watson handled the question.

3. Receive response: The output from Watson contains not just the response but also the confidence score, the evidence and other metadata about the response. Some of this information provides insight into how Watson interpreted the question. If there is an error condition in the pipeline, additional information is returned in the “errorNotifications” field, which is helpful when working with IBM Support.

4. Process response: The response can be shown as-is to the user asking the question, or it can be passed on to another component for additional processing before it is displayed. It could be used to visually represent the response(s), to create a way to ask additional simulated questions, or to process the answer to obtain additional insights on the interactions, as sketched below. After Watson provides the response, the application developer is responsible for integrating that response seamlessly into the application.
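Continuing the sketch above, here is a small helper that pulls the top answers and confidence scores out of the response. The field names (“answers”, “confidence”, “text”) are assumptions based on the whitepaper’s description, not a published schema.

def top_answers(watson_json, threshold=0.5):
    """Return (text, confidence) pairs at or above a confidence threshold."""
    answers = watson_json.get("question", {}).get("answers", [])
    return [(a.get("text"), a.get("confidence"))
            for a in answers
            if a.get("confidence", 0) >= threshold]

for text, confidence in top_answers(resp.json()):
    print(confidence, text)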

In this manner, by creating applications that embed Watson’s cognitive skills, Watson can enable businesses and consumers to have a dynamic exchange, leading to insights and new, synergistic ways of thinking. This can enable companies to
• Gain insights from hitherto largely untapped sources of unstructured content
• Converse with clients or users the way humans would – using natural language
• Enhance the knowledge in the company’s domain with a system that learns over time.

Reference: Whitepaper: Creating Cognitive Apps Powered By IBM Watson – Getting Started with the API