Can Text Analytics really help reduce manual overload? Let’s look back at how we started sharing interesting links and bookmarks. In the early 2000s, many “tagging sites” emerged. Tagging helped us collate the links we found online, rather than leaving them in a browser, and let us share a whole set of saved links under a single tag. It was soon realized, however, that the same one-word tag meant different things to different people. A tag needed context and meaning, and so relevant sub-tags became part of the search criteria.
But how does Text Analytics, using specific algorithms, handle such words when trying to match them? Within a defined context, such as contract documents, and within a specific field, Text Analytics can make the work much easier by getting the machine to intelligently retrieve the exact paragraph relevant to the task. But how can this work if one were to search across the entire world wide web?
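To make the contract-document scenario concrete, here is a minimal sketch of context-aware paragraph retrieval using TF-IDF weighting and cosine similarity. The paragraphs and query below are hypothetical, and real systems would add stemming, synonym handling, or embeddings; this toy version only shows how term weights let a machine pick the most relevant paragraph.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and keep only alphabetic word tokens
    return re.findall(r"[a-z]+", text.lower())

def tf_idf_vectors(docs):
    # Term counts per document
    tokenized = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    # Document frequency: in how many docs each term appears
    df = Counter()
    for counts in tokenized:
        df.update(counts.keys())
    # Weight = term frequency * log(N / document frequency),
    # so terms common to every paragraph get weight zero
    vectors = [{t: c * math.log(n / df[t]) for t, c in counts.items()}
               for counts in tokenized]
    return vectors, df, n

def cosine(a, b):
    # Cosine similarity between two sparse dict vectors
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def best_paragraph(paragraphs, query):
    # Return the paragraph most similar to the query
    vectors, df, n = tf_idf_vectors(paragraphs)
    q_counts = Counter(tokenize(query))
    # Score query terms with the same idf weights; unseen terms are dropped
    q_vec = {t: c * math.log(n / df[t]) for t, c in q_counts.items() if t in df}
    scores = [cosine(q_vec, v) for v in vectors]
    return paragraphs[max(range(len(scores)), key=scores.__getitem__)]

# Hypothetical contract paragraphs
paragraphs = [
    "Either party may terminate this agreement with thirty days written notice.",
    "Payment is due within fifteen days of invoice receipt.",
    "Confidential information must not be disclosed to third parties.",
]
print(best_paragraph(paragraphs, "notice period for termination"))
```

Because the corpus is restricted to one contract, the idf weights sharply separate paragraph-specific terms like "notice" from boilerplate shared across clauses. On the open web, with billions of documents and no fixed field, those weights lose this discriminating power, which is exactly the difficulty the question above raises.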