A New Chapter in Log Management By @TrevParsons | @DevOpsSummit [#DevOps]


Unlimited Logging: A New Chapter in Log Management

It's no secret that log data is quickly becoming one of the most valuable sources of information within organizations. There are open source, on-premise, and cloud-based solutions to help you glean value from your logs in many different ways.

Broadly, organizations use logs for debugging during development, for monitoring and troubleshooting production systems, for security audit trails and forensics, and (more and more) for business use cases spanning teams from product management to marketing.

I love seeing logs used in nontraditional ways, for example:

  • How Elon Musk used logs to call out a New York Times journalist after an unfavorable review of the then-new Tesla Model S. Musk went back to the logs to outline ‘exactly' what happened during the test drive versus what was claimed, highlighting the value of keeping log-level evidence of your systems' behavior in case you ever need it.
  • Monitoring your users in real time. With JavaScript logging you can log directly from the client's browser as users navigate your app, giving you insight into their behavior. In the past, this kind of insight came from watching the shop floor: you could see where customers congregated and which items were popular. Today, with your customers always online, you can capture the same insight by logging their activity and viewing it in a ‘live streaming' mode. Because logs record this information, you get a far more analytical view of customer trends, and even of individual customer behavior, enabling you to better position your offering and drive more value for your business. For a product manager, or for the founder of a SaaS company like myself, it can be addictive to sit and watch users in real time as they try new features and interact with your technology.
  • Logs used as simple data structures to build powerful distributed systems. Jay Kreps' article is a must-read for every developer interested in understanding the power of the humble log as a simple data structure for solving complex problems.
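The client-side logging mentioned above can be sketched in a few lines of TypeScript. This is a minimal illustration, not an actual Logentries client: the event shape, the `makeEvent` and `send` helpers, and the stubbed transport are all assumptions for the sake of the example.

```typescript
// Minimal sketch of client-side logging: each user action is captured
// as a structured log entry and handed to a pluggable transport (in a
// real page, the transport would POST to a log collection endpoint).
interface ClientEvent {
  timestamp: string;   // ISO 8601 event time
  sessionId: string;   // correlates events from one visit
  action: string;      // e.g. "click", "pageview", "feature_used"
  detail: string;      // free-form context, e.g. the feature name
}

function makeEvent(sessionId: string, action: string, detail: string): ClientEvent {
  return { timestamp: new Date().toISOString(), sessionId, action, detail };
}

// The transport is injected so the sketch stays self-contained and testable.
function send(event: ClientEvent, transport: (body: string) => void): void {
  transport(JSON.stringify(event));
}

// Example: record that a user tried a new feature.
const sent: string[] = [];
send(makeEvent("sess-42", "feature_used", "live-tail"), body => { sent.push(body); });
```

Viewed as a stream, entries like these are exactly what a ‘live streaming' log view would tail in real time.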

Logs continue to be one of the fastest-growing data sources at organizations today. For example, the largest database hosted on AWS contains machine-generated statistics on AWS itself.

Log Management's Ugly Secret
Organizations manage logs much as we managed email in the 1990s.

Organizations constantly worry about data volumes, exceeding data limits, and incurring unpredictable (and costly) fees. You end up forever looking over your shoulder at your next log management bill, which is almost always based on the GB or TB of data you produce.

The constant murmur we hear from organizations goes something like: ‘Look, don't get me wrong, we love our logs and would find it very difficult to operate our business without them... BUT it's bloody expensive!'

This cost largely comes in two flavors:

  • The traditional vendors' per-GB, pay-for-everything pricing model, whose costs can become prohibitive as log volumes increase.
  • Open source or roll-your-own solutions, which organizations frustrated with that model often turn to, and which frequently prove even more expensive: you are left footing the bill for the infrastructure required to run the internal logging cluster, as well as the developer salaries required to build and continually maintain it.

Enter Unlimited Logging - Logentries is to logs as Gmail was to email


At Logentries we're moving away from charging organizations per GB for everything they log; instead, we want you to send us ALL your log data and not worry about the cost.

Think about how it felt when Gmail came along and you never again had to worry about running out of inbox space - it opened a new chapter in how email was delivered as a service, and most certainly for the better. At Logentries we are doing the same for our users with our new Unlimited Logging: send us all your data and don't worry about it.

You do not necessarily get 2X the value from your logs when your log volumes double.

Value is more aligned with the type of analysis you can perform and the valuable trends you can extract from your data.

How Unlimited Logging Works
At Logentries we have a fundamentally different perspective:

Log management and analysis should be simple to use and real time:

  • You should not need to be a data scientist to work with and understand your logs.
  • You should not have to learn a complex search query language to navigate and get value from your logs.
  • Analysis should be performed in real time; you shouldn't have to wait 10 minutes for an alert on an important event in your system.

At Logentries we have built technology on more than a decade of research in distributed systems, with a unique pre-processing engine that analyzes your data up front, in real time, with built-in intelligence, so that you do not need to construct complex search queries. We do the hard work so you don't have to, aiming to make your log data analysis quick and painless, yet still powerful.

Send as much data as you like:

Our unique pre-processing engine can be used to dynamically route your logs for real time analysis, or alternatively, into cloud storage for on-demand analytics. Generally, an organization will have log data that needs to be analyzed immediately, in real time. But organizations also tend to have a lot of data that MAY need to be analyzed at some point in the future - this is where on-demand analytics comes into play.
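The routing idea above can be sketched as a tiny dispatcher. The `Route` type, `StreamConfig` shape, and `routeLine` function are illustrative inventions, not the actual Logentries pre-processing API; the point is simply that a per-stream flag decides whether a line goes to real-time analysis or to inexpensive storage for later, on-demand analysis.

```typescript
// Hedged sketch: per-stream routing of log lines, as described above.
type Route = "realtime" | "ondemand";

interface StreamConfig {
  name: string;   // logical log stream, e.g. "app-errors"
  route: Route;   // analyze now, or park in cheap storage
}

// Dispatch one line according to its stream's configured route.
function routeLine(line: string, cfg: StreamConfig,
                   analyze: (l: string) => void,
                   archive: (l: string) => void): void {
  if (cfg.route === "realtime") {
    analyze(line);   // needs immediate, real-time analysis
  } else {
    archive(line);   // kept cheaply, analyzed on demand later
  }
}

// Example: errors go to real-time analysis, debug chatter to storage.
const hot: string[] = [];
const cold: string[] = [];
routeLine("GET /checkout 500", { name: "app-errors", route: "realtime" },
          l => { hot.push(l); }, l => { cold.push(l); });
routeLine("heartbeat ok", { name: "debug-trace", route: "ondemand" },
          l => { hot.push(l); }, l => { cold.push(l); });
```

The design choice is that the user, not the vendor, sets the route per stream, which is what keeps the analyzed (and billed) volume small.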

Traditionally, logging providers have tried to apply a one-size-fits-all approach.

All your data gets indexed up front, so that they can charge you per GB for everything indexed. At Logentries we let YOU decide what data you want to analyze right now, and what data you want to analyze at some point in the future - on demand.

We allow you to send as much data as you like to cloud storage, and we only charge for what you actually ingest into the Logentries service for analysis. This gives organizations a very flexible way to significantly reduce and cap their logging costs without worrying about log ‘inflation' as their systems and business grow - as they invariably do.
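A back-of-the-envelope comparison makes the difference between the two pricing models concrete. All rates and volumes below are hypothetical, chosen purely to illustrate the arithmetic, and do not reflect actual Logentries pricing.

```typescript
// Illustrative cost comparison: index-everything vs ingest-on-demand.
// All rates are made up for the sake of the example.

// Traditional model: every GB produced is indexed and billed at full rate.
function indexEverythingCost(totalGB: number, perGB: number): number {
  return totalGB * perGB;
}

// Ingest-on-demand model: full rate only for data actually ingested for
// analysis, plus a much cheaper storage rate for everything else.
function ingestOnDemandCost(totalGB: number, analyzedGB: number,
                            perAnalyzedGB: number, storagePerGB: number): number {
  return analyzedGB * perAnalyzedGB + (totalGB - analyzedGB) * storagePerGB;
}

// 1 TB/month of logs, of which only 200 GB needs real-time analysis:
const traditional = indexEverythingCost(1000, 2.0);          // 1000 * 2.0 = 2000
const unlimited = ingestOnDemandCost(1000, 200, 2.0, 0.03);  // 400 + 24 = 424
```

Because only the analyzed slice is billed at the full rate, total log volume can keep growing while the bill stays dominated by the (much smaller) analyzed portion.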

Want to check out our unlimited logging? You can get more details here on how it works and how you can start to better manage and cut your logging costs.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years' experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
