Hadoop is the Answer! What is the Question?

Don't Believe the Hype.

Disclosure: In addition to being a Sys-Con contributor, I am the VP of Marketing at 1010data, a provider of a cloud-based Big Data analytics platform that gives quantitative analysts direct, interactive access to large amounts of raw structured and semi-structured data.

So, 1010data doesn't have much actual overlap with Hadoop, which provides programmatic batch job access to linked content files for text, log and social graph data analytics.  I must confess, I have not been paying much attention to Hadoop.

But, while doing research for my upcoming presentation on Cloud-based Big Data Analytics at Cloud Expo in NYC (3:00 on Thursday), I uncovered an apocrypha in the making: a rich mythology about a yellow elephant whose name seems to have become the answer to every question about Big Data.  Got a boatload of data?  Store it in Hadoop.  Want to search and analyze that data?  Do it with Hadoop.  Want to invest in a technology company?  If it works with Hadoop, get out the checkbook and get in line.

And then, I was on a Big Data panel at the Cowen 39th Annual Technology, Media and Telecom Conference this week and several of my fellow panelists were from companies that in one way or another had something to do with Hadoop.

So, as a public service to prospective victims of the Hadoop hype, I decided to try to figure out what Hadoop really is and what it is really good for.  No technology gets so popular so quickly unless it is good for something, and Hadoop is no exception.  But Hadoop is not the solution to every Big Data problem.  Nothing is.  Hadoop is a low-level technology that must be programmed to be useful for anything.

It is a relatively immature (V0.20.x) Apache open source project that has spawned a number of related projects and a growing number of applications and systems built on top of the crowd-sourced Hadoop code.  I have discovered that many people say "Hadoop" when they really mean Hadoop plus things that run on or with it.  For instance, "Hadoop is an analytical database" means Hadoop plus Hive plus Pig.  The ever-lengthening "Powered By" list is here.

Despite their general enthusiasm for the framework, though, many Hadoop developers also stress the difficulty of programming applications for it, including Chris Wensel, the developer of the Cascading MapReduce library and API, who writes on his blog,

The one thing Hadoop does not help with is providing a simple means to develop real world applications. Hadoop works in terms of MapReduce jobs. But real work consists of many, if not dozens, of MapReduce jobs chained together, working in parallel and serially.

MapReduce is a patented software framework that was developed by Google and underlies Hadoop.  Its Wikipedia entry describes the two parts like this:

"Map" step: The master node takes the input, partitions it up into smaller sub-problems, and distributes those to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes that smaller problem, and passes the answer back to its master node.

"Reduce" step: The master node then takes the answers to all the sub-problems and combines them in some way to get the output - the answer to the problem it was originally trying to solve.

So what is Hadoop?  Straight from the elephant's mouth,

Apache Hadoop is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named Map/Reduce, where the application is divided into many small fragments of work, each of which may be executed or reexecuted on any node in the cluster. In addition, it provides a distributed file system (HDFS) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and the distributed file system are designed so that node failures are automatically handled by the framework.

Said more simply, Hadoop lets you chop up large amounts of data and processing so as to spread them out over a dedicated cluster of commodity server machines, providing high scalability, fault tolerance and efficiency in processing operations on large quantities of unstructured data (text and web content) and semi-structured data (log records, social graphs, etc.).  Inasmuch as a computer exists to process data, Hadoop in effect turns lots of cheap little computers into one big computer that is especially good for analyzing indexed text.
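Hadoop Streaming, a utility that ships with Hadoop, shows how little the framework asks of application code: the map and reduce steps can be supplied as ordinary scripts that read stdin and write tab-separated key/value pairs to stdout, while the framework handles splitting, shuffling, sorting and retries.  A minimal word-count pair might look like the following sketch; the file names are illustrative, not prescriptive.

#!/usr/bin/env python
# mapper.py - a minimal Hadoop Streaming mapper (word count).
# Hadoop pipes each line of an input split to stdin and collects
# tab-separated key/value pairs from stdout.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t1" % word.lower())

#!/usr/bin/env python
# reducer.py - a minimal Hadoop Streaming reducer (word count).
# Hadoop sorts the mapper output by key before it reaches the
# reducer, so all counts for one word arrive on consecutive lines.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, 0
    current_count += int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))

The pair would be submitted with the hadoop-streaming JAR, naming the two scripts and the HDFS input and output paths; everything else - distribution, restarts, node failures - is the framework's problem.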

By far Hadoop's most generally interesting and newsworthy triumph to date has been helping IBM's Watson supercomputer beat the best humans on Jeopardy.  That role is dissected here.

Aside from winning game shows, though, what is Hadoop good for?  Speaking of the Big Data biggie, IBM, here is Big Blue's answer to that question by way of a pithy Judith Hurwitz tweet:

[Embedded tweet]

But, Hadoop is immature - not yet at Version 1! - open source code created and edited by many different pro bono programmers, without a commercial binding of business process, coding discipline, or direct market dynamics.  In other words, it is what it is, and some functions that are hard or tedious to code, however badly needed, go wanting.  (Read tales of "zombie tasks" and other terrors from the "Dark Side of Hadoop" here.)

In any case, though, Hadoop is very versatile and many smart people and companies have found an amazing variety of uses to put it to.  And it is always fun to watch the tech world wind itself up around a new topic.  Big Data is the new black and Hadoop is the "it" elephant.

But it isn't good for everything.  See http://wiki.apache.org/hadoop/HadoopIsNot or read Ricky Ho's excellent blog post, which shows how Hadoop's design makes it a poor choice for things like fast, interactive, ad hoc analysis of large amounts of frequently updated structured (transactional) data, as for, say, all the daily trades in a busy stock exchange or large retail chain.

As Ho explains it, Hadoop spreads data out in file chunks on a number of computers and it breaks programming down into many small tasks, also spread across those machines and run in parallel as a batch job.  While a job is running, the data it is working on cannot be updated and, because the processes must communicate with each other and they are spread out across multiple networked computers, there is considerable network-related latency in the execution of the job.

Hadoop grew out of work done by both Yahoo and Google, which betrays its essential purpose: gathering, storing and indexing vast amounts of text and semi-structured data in chunks, understanding the relationships between those chunks, and finding them quickly when needed.  So, it is not surprising that the most impressive uses of Hadoop we have seen are in the area of analyzing so-called "social data".

That's the voluminous accumulation of comments, web pages, and tweets, along with the identities, locations, relationships and other attributes associated with the people, sites, things and processes referenced in that data.  There is much to be learned from such data.  But when there is a lot of it, just putting it somewhere and searching and analyzing it efficiently across multiple computers and disks is difficult, and Hadoop and many of its best applications are built to make that easier.

But, there are numerous products now layered on top of Hadoop that make it function as a tabular relational database, among other forms of storage.  This enables customers to reuse SQL code they have already developed and to develop new query code in a language they know.  And it enables Hadoop to go pilot fish or Trojan horse on Oracle and MySQL.  But, using SQL as an access language and materializing data in unordered, joined, indexed tables does not play to Hadoop's natural strengths.

Hive is a Hadoop project that relationalizes Hadoop for data warehousing and analytics, and here is what one apparently experienced crowdsourcer said about it on the Stack Overflow site:

Hive is based on Hadoop which is a batch processing system. Accordingly, this system does not and cannot promise low latencies on queries. The paradigm here is strictly of submitting jobs and being notified when the jobs are completed as opposed to real time queries. As a result it should not be compared with systems like Oracle where analysis is done on a significantly smaller amount of data but the analysis proceeds much more iteratively with the response times between iterations being less than a few minutes. For Hive queries response times for even the smallest jobs can be of the order of 5-10 minutes and for larger jobs this may even run into hours.
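For a sense of what that looks like from the client side, here is a sketch of submitting a HiveQL query from Python.  It assumes the third-party PyHive client and a reachable HiveServer; the host, port and table name are hypothetical.

# A sketch of querying Hive from Python, assuming the third-party
# PyHive client (pip install pyhive) and a running HiveServer.
# The host, port and table name below are hypothetical.
from pyhive import hive

conn = hive.Connection(host="hive.example.com", port=10000)
cursor = conn.cursor()

# This reads like ordinary SQL, but Hive compiles it into one or
# more batch MapReduce jobs, so even a small scan can take minutes.
cursor.execute("SELECT page, COUNT(*) AS hits FROM weblogs GROUP BY page")
for page, hits in cursor.fetchall():
    print(page, hits)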

Cutting through the Hadoop hype: if you are looking to query, report on, or analyze large amounts of unstructured data, or you need to build a scalable SQL data warehouse, and in either case you can tolerate the latency and batch processing is an acceptable model for your situation, then Hadoop and its many adjuncts may solve your problem.

But, if you need to do interactive analytics on large amounts of raw tabular and semi-structured data, Hadoop is not what you are looking for.  If you want to do it in a managed cloud, you could look at 1010data; on dedicated hardware, check Teradata; on commodity hardware, Vertica might be worth a look.

 

More Stories By Tim Negris

Tim Negris is SVP, Marketing & Sales at Yottamine Analytics, a pioneering Big Data machine learning software company. He occasionally authors software industry news analysis and insights on Ulitzer.com and is a 25-year technology industry veteran with expertise in software development, database, networking, social media, cloud computing, mobile apps, analytics, and other enabling technologies.

He is recognized for his ability to rapidly translate complex technical information and concepts into compelling, actionable knowledge. He is also widely credited with coining the term and co-developing the concept of the “Thin Client” computing model while working for Larry Ellison in the early days of Oracle.

Tim has also held a variety of executive and consulting roles in numerous start-ups and several established companies, including Sybase, Oracle, HP, Dell, and IBM. He is a frequent contributor to a number of publications and sites, focusing on technologies and their applications, and has written a number of advanced software applications for social media, video streaming, and music education.
