
CRASH Webinar: Code Quality Q & A Discussion

We just finished up the 30-minute webinar where Dr. Bill Curtis, our Chief Scientist, described some of the findings that are about to be published by CAST Research Labs. The CRASH (CAST Research on Application Software Health) report for 2014 is chock full of new data on software risk, code quality and technical debt. We expect the initial CRASH report to be produced in the next month, and based on some of the inquiries we've received so far, we will probably see a number of smaller follow-up studies come out of the 2014 CRASH data.

The CRASH data Bill presented this year is based on 1,316 applications comprising 706 million lines of code – a pretty large subset of the overall Appmarq repository. That works out to an average application size of 536 KLOC, so we're talking big data for BIG apps here. This is by far the biggest repository of enterprise IT code quality and technical debt research data.

Some of the findings presented included correlations between the health factors – we learned that Performance Efficiency is largely uncorrelated with the other health factors, while Security is highly correlated with software Robustness. We also saw how the health factor scores were distributed across the sample set and the differences in structural code quality by outsourcing, offshoring, Agile and CMMI level.

There were many questions, so we went 15 minutes over the 30-minute timeslot to address all of them. The main point of this post is to document some of the more important questions and my summary of the answers provided by Bill Curtis, especially for those of you who could not stay on past the half hour. So, here goes:

1. If an application is better in one health factor, e.g., Robustness, does it tend to also have better scores in the other areas (Security, Performance)?

We have a lot of data on this, some of which was shown in the webinar. Overall, what we're finding is that most of the health factors are correlated – especially Robustness and Security. The only exception is Performance Efficiency, which has very low correlation with any of the other health factors.
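For readers who want to run this kind of pairwise check on their own portfolio data, here is a minimal, self-contained C++ sketch of a Pearson correlation between two columns of health-factor scores. The factor names, the 1.0–4.0 scale and the numbers are made-up placeholders for illustration only – this is not the CRASH dataset and not CAST's actual statistical tooling.

```cpp
// Illustrative sketch: pairwise Pearson correlation between health-factor
// scores across a handful of hypothetical applications. All numbers are
// placeholders, not CRASH/Appmarq data.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Pearson correlation coefficient; assumes x and y have the same,
// non-zero length and non-zero variance.
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double mean_x = 0.0, mean_y = 0.0;
    for (std::size_t i = 0; i < n; ++i) { mean_x += x[i]; mean_y += y[i]; }
    mean_x /= n;
    mean_y /= n;

    double cov = 0.0, var_x = 0.0, var_y = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double dx = x[i] - mean_x;
        const double dy = y[i] - mean_y;
        cov   += dx * dy;
        var_x += dx * dx;
        var_y += dy * dy;
    }
    return cov / std::sqrt(var_x * var_y);
}

int main() {
    // Hypothetical per-application scores on a 1.0-4.0 scale.
    std::vector<double> robustness  = {3.1, 2.8, 3.5, 2.2, 3.9, 2.7};
    std::vector<double> security    = {3.0, 2.6, 3.6, 2.4, 3.8, 2.5};
    std::vector<double> performance = {3.4, 2.1, 3.0, 3.6, 2.8, 2.2};

    std::cout << "Robustness vs. Security:               "
              << pearson(robustness, security) << "\n";
    std::cout << "Robustness vs. Performance Efficiency: "
              << pearson(robustness, performance) << "\n";
    return 0;
}
```

With these placeholder numbers the first pair comes out close to +1 and the second close to zero – the same shape as the finding above, although the real analysis of course runs over the full 1,316-application sample with proper statistical tooling.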
2. Does the age or maturity of an application have a correlation to its size?

Looking at the demographic age data we have, it seems that older applications are indeed larger. Certainly the COBOL applications tend to be larger than the average. But we did not do a statistical analysis of this particular correlation.

3. Do you track whether an application is based on a commercial package (COTS) or built completely from scratch?

We do, of course, for SAP-based and Oracle-based package customizations, for which we have a large dataset. For most others, it's hard to determine whether the custom application was at some point based on a COTS package. In some cases, even the staff in that IT organization may not know.

4. Do you have findings by industry? Or any details on technical debt? Or by technology?

We will have findings by industry in subsequent CRASH research. We are revamping our technical debt analysis to make it more sophisticated, and that will be published in a separate CRASH research report later this year. The technology analysis is difficult, because so many of the software engineering rules we're looking at are technology specific. We decided to leave that out of this CRASH report and will evaluate how to accurately report that data later in the year.

5. Does language have a significant impact on quality or development cost?

There are many things to consider when thinking about the impact of language. In general, COBOL developers have tended to be more disciplined and trained than Java developers, but that may change over time. When looking at C or C++, there are more opportunities to make technical mistakes in memory management and pointer arithmetic (the short illustrative snippet after this Q&A shows the kind of mistake we mean). But overall, it's hard to make generalizations industry-wide. Each organization has stronger skillsets in certain technologies. We recommend analyzing the code quality in your own organization to draw conclusions for your technology roadmap or developer training regimen.

6. For Java EE applications, are you able to differentiate the quality and productivity of applications developed using industry frameworks versus earlier-generation applications built from the ground up?

Yes, we can. The Appmarq repository contains data on applications built with a number of enterprise Java frameworks, including Struts, Spring, Hibernate and others. We did not focus on this specific question in this year's CRASH research, but we did publish a dedicated report about frameworks last year, based on the previous dataset pulled from Appmarq. You can take a look at it here – the findings might surprise you.

7. Do you have statistics on onshore vs. offshore delivery by function points?

Not yet. We plan to collect this data over the next several years and analyze it in subsequent CRASH reports.

8. What are the current practices for leveraging CAST automated function points to measure the productivity of global project teams? Are there mature models to follow?

This question is not specific to the CRASH report results, but it's an important one that we hear with increasing frequency from IT practitioners. Large-scale adoption of function points for productivity measurement is a rich, detailed topic that deserves significant treatment. Bill Curtis has held full-day seminars on this topic several times over the last couple of years and is working on a book. Some of this material has been summarized in a whitepaper-style research report that you can find here. For more information, please contact us and we can set up more in-depth workshops for your organization.

9. If we look back at the previous CRASH reports, what do we learn when comparing them to the current data?

This is a tough question to answer, as we don't do trending analysis at this point. The difference between this report and the last one is that our dataset is roughly double the size. Some of our past findings are being confirmed – cases where we saw the trend in the data but did not want to report conclusions because we lacked statistical significance. Now we have enough data points to make statements, such as our conclusion about the differences in shoring. We cannot, however, say that quality in the industry as a whole is getting better or worse. We'll need many more years of data to draw coherent conclusions at that scale.
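To make the C and C++ point in question 5 concrete, here is a small, self-contained C++ sketch contrasting a hand-managed buffer – where off-by-one pointer arithmetic, a mismatched delete, a leak or a use-after-free is one typo away – with the safer container idiom. It is generic textbook material for illustration, not code drawn from any application in the CRASH sample.

```cpp
// Illustrative sketch: the kinds of memory-management and pointer-arithmetic
// mistakes that C/C++ makes possible, next to the safer idiom.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 5;

    // Hand-managed buffer: every line below is an opportunity for a defect.
    int* raw = new int[n];
    for (std::size_t i = 0; i < n; ++i) {   // looping with i <= n would write
        raw[i] = static_cast<int>(i) * 10;  // past the end: a classic overrun
    }
    std::cout << raw[n - 1] << "\n";
    delete[] raw;   // must be delete[], not delete; forgetting it leaks,
    raw = nullptr;  // and touching raw after this point is a use-after-free

    // Safer idiom: let the container own the memory and do the bounds math.
    std::vector<int> scores(n);
    for (std::size_t i = 0; i < scores.size(); ++i) {
        scores[i] = static_cast<int>(i) * 10;
    }
    std::cout << scores.at(n - 1) << "\n";  // at() is bounds-checked
    return 0;
}
```

Managed languages such as Java or COBOL do not expose this particular class of defect, which is part of why language mix complicates cross-technology comparisons – though, as the answer above notes, it is hard to generalize industry-wide because each organization's skillsets differ by technology.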


More Stories By Lev Lesokhin

Lev Lesokhin is responsible for CAST's market development, strategy, thought leadership and product marketing worldwide. He has a passion for making customers successful, building the ecosystem, and advancing the state of the art in business technology. Lev comes to CAST from SAP, where he was Director, Global SME Marketing. Prior to SAP, Lev was at the Corporate Executive Board as one of the leaders of the Applications Executive Council, where he worked with the heads of applications organizations at Fortune 1000 companies to identify best management practices.
