
Mainframes Are Dead, Right?

Funny thing about the hype cycle in high tech: things rarely turn out the way the cheerleaders proclaim they will. Mainframes did not magically disappear in any of the waves that predicted their demise. The reason is simple – there is a lot of code running on mainframes that works, and has worked well for a long time. Rewriting all of that code would be a monumental undertaking that, even today, twenty years after the first predictions of the mainframe's demise, many organizations – particularly in financial services – are not taking on.

Don’t get me wrong: there are a variety of reasons why mainframes in their current incarnation are, at best, destined for a small vertical market in the very long run. But the cost of recreating working systems just to get rid of mainframes is going to keep them in a lot of datacenters for the foreseeable future.

But if they’re going to hang around, they do need to communicate with newer systems, and the last five years or so have seen a whole lot of projects to make mainframes play more nicely with the distributed datacenter.

While many of these projects have come off without a hitch, I saw an interesting case in financial services the other day involving a mainframe and a slow communications channel. I thought it was a slick solution, and figured I’d write it up in case any of you run into similar problems.

This company had, over time, become very geographically distributed, but still had some systems running on the mainframe back at corporate HQ. Remote systems needed to communicate with the mainframe, but some of them sat on horrible networks that suffered latency and line-quality issues, and yet the requirement to run apps on the mainframe persisted. The mainframe had limited I/O capacity without expensive upgrades the organization wanted to avoid, so it was weighing alternate solutions to the problem of one branch tying up resources with retransmits and long latency lags while the other branches back-filled the queue.

It’s more complex than that, but I’m keeping it simple for this blog in the hope that, if it applies to you, you’ll be better able to adapt the scenario to your situation. No highly distributed architecture with mainframe interconnects is simple, and no two installations that fit the same description are exactly alike, but this should (hopefully) give you ideas.

[Diagram: branch sites connecting directly to the mainframe, without F5]

This is the source problem: when site 1 (or site N) has connection problems, it locks up some of the mainframe’s I/O resources, slowing everything down. If multiple sites have problems with their communications links, the entire “network” of sites talking to the mainframe can slow down as things back up, making even good connections come out slow.
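To make the failure mode concrete, here is a minimal sketch of the queuing effect. It is not from the original case; the pool size and timings are made-up numbers, and the semaphore simply stands in for the mainframe's limited I/O slots:

```python
import threading
import time

# Hypothetical figure for illustration: the backend can service only
# a small, fixed number of connections at once.
IO_SLOTS = threading.Semaphore(2)

def branch_request(site: str, transfer_seconds: float) -> None:
    """Each request holds an I/O slot for the full duration of the
    transfer, including any latency and retransmit stalls on the WAN."""
    start = time.monotonic()
    with IO_SLOTS:
        time.sleep(transfer_seconds)  # stand-in for the actual transfer
    queued = time.monotonic() - start - transfer_seconds
    print(f"{site}: transfer {transfer_seconds:.1f}s, queued {queued:.1f}s")

# Two sites on bad links hold slots for 5s each; healthy sites queue behind them.
workers = [threading.Thread(target=branch_request, args=(f"slow-{i}", 5.0)) for i in range(2)]
workers += [threading.Thread(target=branch_request, args=(f"fast-{i}", 0.2)) for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Run it and the fast sites report sub-second transfers but multi-second queue times: the slow links, not the work itself, dominate everyone's latency.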

In the following diagram I’m using F5 as an example, mainly because it is the solution I know to have been tested. If you use a different ADC vendor, call your sales or support reps and ask whether you can do this with their product. You won’t get all the other excellent features of the market-leading ADC, but you’ll solve this problem, which is the point of this blog.

The financial services organization in question could simply place BIG-IP devices at the points where its systems connect to the Internet. That makes it possible to configure the F5 devices to terminate the incoming connections and buffer responses. The result is that the mainframe only holds a connection open long enough for the transfer to cross the LAN, eliminating the latency and retransmit problems posed by the poor incoming links.
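Conceptually, the device acts as a full proxy: it absorbs the whole request from the slow WAN side first, then opens a short-lived LAN connection to the backend. Here is a minimal sketch of that pattern in Python. To be clear, this illustrates the idea, not how BIG-IP is implemented; the host, port, and one-request-per-connection protocol are assumptions for the example:

```python
import asyncio

# Hypothetical backend address standing in for the mainframe.
MAINFRAME_HOST, MAINFRAME_PORT = "mainframe.example.internal", 3270

async def handle_branch(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # 1. Absorb the entire request from the (possibly slow) WAN side first.
    request = await reader.read()  # buffers until the client half-closes

    # 2. Only now touch the backend: this connection lives just long
    #    enough for a fast LAN round trip, regardless of WAN quality.
    mf_reader, mf_writer = await asyncio.open_connection(MAINFRAME_HOST, MAINFRAME_PORT)
    mf_writer.write(request)
    await mf_writer.drain()
    mf_writer.write_eof()
    response = await mf_reader.read()  # buffer the full response
    mf_writer.close()
    await mf_writer.wait_closed()      # backend I/O slot freed here

    # 3. Dribble the response back over the slow link at whatever pace it allows.
    writer.write(response)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle_branch, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The key property is in step 2: the mainframe-side socket exists only for the LAN round trip, so a stalled branch link can no longer pin one of the mainframe's I/O slots.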

Again, it is never quite that simple (these are highly complex systems), but it should give you some ideas if you run into similar issues.

Here is what the above diagram would look like with the solution pieces in place.

[Diagram: branch sites connecting to the mainframe through F5 BIG-IP devices at each connecting point]

Note that the F5 boxes in the branches are not strictly necessary within the confines of the problem statement – you only care about alleviating problems on the mainframe side – but they offer the ability to do some bi-directional optimizations that can improve communications between the sites. They also open the possibility of an encrypted tunnel in the future, which in financial services is highly attractive.
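With proxies at both ends, the branch-to-HQ leg can be encrypted without the mainframe ever seeing TLS. A hedged sketch of the branch side, assuming a hypothetical HQ proxy endpoint and an internal CA bundle (neither comes from the original write-up):

```python
import asyncio
import ssl

# Hypothetical names: "hq-proxy.example.com" and "hq-proxy-ca.pem" stand in
# for the organization's real HQ proxy endpoint and internal CA certificate.
def branch_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_verify_locations("hq-proxy-ca.pem")  # trust only the internal CA
    return ctx

async def open_tunnel() -> tuple[asyncio.StreamReader, asyncio.StreamWriter]:
    # The branch proxy dials the HQ proxy over TLS; traffic between sites is
    # encrypted, while the HQ proxy still speaks plain TCP to the mainframe.
    return await asyncio.open_connection(
        "hq-proxy.example.com", 8443, ssl=branch_tls_context()
    )
```

The design point is that encryption terminates at the proxies, so the mainframe application code does not change at all.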

I thought it was cool that someone approached this as a network problem with mainframe symptoms rather than the other way around, and it is a relatively simple fix to implement. Mainframes aren’t going away, but as they’re pushed harder and put behind more and more applications, inventive solutions to this type of problem will become increasingly common. Which is pretty cool.



More Stories By Don MacVittie

Don MacVittie is the founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
