Economical Data Warehousing Using Amazon Web Services and Hadoop

How to transfer data between an Oracle database and Hadoop using Sqoop

Sqoop makes it very easy to transfer data between Oracle and Hadoop with a single command. The reason we might want to import data from an Oracle database into Hadoop/Hive is to join Hive tables with Oracle lookup tables, or with other data residing in the Oracle database.

Data originating from an Oracle database can help us better understand and analyze the raw, more granular data contained in Hive/HDFS.

Sqoop uses a JDBC driver to connect to an Oracle database. If you have a table named results in your Oracle database and want its data imported into Hadoop HDFS (the Hadoop Distributed File System) for further processing by Hive, you only need to issue a single command:

./sqoop import --connect jdbc:oracle:thin:@ec2-23-21-167-145.compute-1.amazonaws.com:1521:TEST --username system -P --table results --columns "owner" -m 1

This command connects to the Oracle database TEST, residing on the Amazon Web Services server ec2-23-21-167-145.compute-1.amazonaws.com, as the user system, and imports the column owner from the table results into HDFS.
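To verify the import, we can inspect the files Sqoop created in HDFS (the directory name below is an assumption; by default Sqoop writes part-m-* files under a directory named after the table):

$ hadoop fs -ls results
$ hadoop fs -cat results/part-m-00000 | head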

It is now easy to load this data into Hive for further processing with the HiveQL language, which offers SQL-like data processing while transparently utilizing the MapReduce paradigm (there is no need to write MapReduce programs).
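If the results table does not yet exist in Hive, it has to be created before loading; a minimal sketch, assuming only the single owner column imported above:

hive> CREATE TABLE results (owner STRING);

The imported file can then be loaded into the table: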

hive>LOAD DATA  INPATH '/usr/lib/hadoop-0.20/sqoop-1.3.0/bin/results/part-m-00000' OVERWRITE INTO TABLE results;
Loading data to table default.results
Deleted file:/user/hive/warehouse/results
OK
Time taken: 0.166 seconds

We can now issue various HiveQL commands to query and further process this data:

hive> select * from results limit 10;
select * from results limit 10;
OK
SYS
SYS
SYS
SYS
SYS
SYS
SYS
SYS
SYS
SYS

Time taken: 0.219 seconds
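Aggregations also run as MapReduce jobs behind the scenes; for example (an illustrative query using the table and column imported above):

hive> SELECT owner, COUNT(*) FROM results GROUP BY owner;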

Once data warehousing analytics are completed in Hadoop on Amazon AWS, it is often convenient to upload the aggregate data (results) to a relational database like Oracle for further data processing or visualization.

Let's say the result of our data analysis is contained in the file target.txt, residing in HDFS.

The following command exports the file target.txt from HDFS into the Oracle database ORCL, residing on the server ec2-23-21-178-102.compute-1.amazonaws.com, connecting as the user system. The data will be exported into the Oracle table HADOOP_SOURCE.

$ sqoop export --connect jdbc:oracle:thin:@ec2-23-21-178-102.compute-1.amazonaws.com:1521:ORCL --username SYSTEM --table HADOOP_SOURCE  --export-dir /usr/lib/hadoop-0.20/input/target.txt -P
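Note that sqoop export does not create the target table; HADOOP_SOURCE must already exist in Oracle with a structure matching the exported file. A hypothetical definition for a single-column export (the OWNER column name and width are assumptions) might be:

SQL> CREATE TABLE HADOOP_SOURCE (OWNER VARCHAR2(128));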

If your Hadoop cluster resides on Amazon Web Services, it is very easy to add more processing power in the form of additional Hadoop DataNodes. It is also possible to load extremely large volumes of data (petabytes) using the AWS Import/Export service. You can also upload data to the AWS S3 service, or straight to AWS EC2 EBS volumes that can be attached to the Hadoop DataNodes of your choice.
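For example, results can be copied between HDFS and S3 with distcp, assuming S3 credentials are configured in core-site.xml (the bucket name below is a placeholder):

$ hadoop distcp /usr/lib/hadoop-0.20/input/target.txt s3n://my-bucket/target.txt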

More Stories By Ranko Mosic

Ranko Mosic, BScEng, specializes in Big Data/Data Architecture consulting services (database/data architecture, machine learning). His clients are in the finance, retail, and telecommunications industries. Ranko welcomes inquiries about his availability for consulting engagements and can be reached at 408-757-0053 or [email protected]
