Thursday, April 20, 2017

Cautiously optimistic on blockchain at MIT

Blockchain has certain similarities to a number of other emerging technologies, such as IoT and cloud-native more broadly. There’s a lot of hype, and there’s conflation of different facets or use cases that aren’t necessarily all that related to each other. I won’t say that MIT Technology Review’s Business of Blockchain event at the Media Lab on April 18 avoided those traps entirely. But overall it did far better than average in providing a lucid and balanced perspective. In this post, I share some of the more interesting themes, discussion points, and statements from the day.

It’s very early

Joi Ito, MIT Media Lab

Joi Ito, the Director of the MIT Media Lab, captured what was probably the best description of the overall sentiment about blockchain adoption when he said that we “should have a cautious but optimistic view.” He went on to say that “it’s a long game” and that we should also “be prepared for quite a bit of change.”

In spite of this, he observed that there was a huge amount of investment going on. Asked why, he essentially shrugged and suggested that it was like the Internet boom, where VCs and others felt they had to be part of the gold rush: “It’s about the money.” He summed up by saying “we’re investing like it’s 1998 but it’s more like 1989.”

The role of standards

In Ito’s view, standards will play an important role, and open standards are one of the things that we should pay attention to. However, Ito also drew further on the analogies between blockchain and the Internet when he went on to say that “where we standardize isn’t necessarily a foregone conclusion” and that once you lock in on a layer (such as IP in the case of the Internet), it’s harder to innovate in that space.

As an example of the ongoing architectural discussion, he noted that there are “huge arguments if contracts should be a separate layer” yet we “can’t really be interoperable until [we] agree on what goes in which layer.”

Use cases

Most of the discussion revolved around payment systems and, to a somewhat lesser degree, supply chain (e.g. provenance tracking).

In addition to cryptocurrencies (with greater or lesser degrees of anonymity), payment systems also encompass using blockchains to reduce the cost of intermediaries or to eliminate them entirely. This could in principle better enable micropayments or payment systems for individuals who are currently unbanked. Robleh Ali, a research scientist in MIT’s Digital Currency Initiative, noted that there’s “very little competition in the financial sector. It’s hard to enter for regulatory and other reasons.” In his opinion, even if blockchain-based payment systems didn’t eliminate the role of banks, moving money outside the financial system would put pressure on them to reduce fees.

A couple of other well-worn blockchain examples involve supply chains. Everledger uses blockchain to track features such as diamond cut and quality, as well as monitoring diamonds from war zones. Another recent example comes from IBM and Maersk, who say that they are using blockchain to “manage transactions among [a] network of shippers, freight forwarders, ocean carriers, ports and customs authorities.”

(IBM has been very involved with the Hyperledger Project, which my employer Red Hat is also a member of. For more background on Hyperledger, check out my podcast and discussion with Brian Behlendorf—who also spoke at this event—from a couple of months back.)

It’s at least plausible that supply chain could be a good fit for blockchain. There’s a lot of interest in better tracking assets as they flow through a web of disconnected entities. And it’s an area that doesn’t have much in the way of well-established governing entities or standardized practices and systems. 
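
For illustration, here’s a minimal Python sketch of the property that makes append-only ledgers appealing for provenance: each custody record commits to the one before it, so any retroactive edit breaks every subsequent link. This is a toy, not how Everledger or Hyperledger actually work (real blockchains add distributed consensus among parties that don’t trust each other), and all the names and fields are invented for the example.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    # Deterministic hash of a record's contents (sorted keys for stability).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceChain:
    """Append-only chain of custody records; each entry commits to its predecessor."""

    def __init__(self):
        self.entries = []

    def append(self, asset_id: str, event: str, party: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "asset_id": asset_id,
            "event": event,
            "party": party,
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        record["hash"] = record_hash(record)
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every hash; tampering with any earlier entry fails here.
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev or entry["hash"] != record_hash(body):
                return False
            prev = entry["hash"]
        return True

chain = ProvenanceChain()
chain.append("diamond-123", "mined", "Example Mine Co.")
chain.append("diamond-123", "cut and graded", "Example Cutter Ltd.")
assert chain.verify()
```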

Amber Baldet, JP Morgan

Identity

This topic kept coming up in various forms. Amber Baldet of JP Morgan went so far as to say “If we get identity wrong, it will undermine everything else. Who owns our identity? You or the government? How do you transfer identity?"

In a lunchtime discussion, Michael Casey of MIT noted that “knowing that we can trust whoever is going to transact is going to be a fundamental question.” But he went on to ask “how do we bring back in privacy given that with big data we can start to connect, say, bitcoin identities.”

The other big identity tradeoff familiar to anyone who deals with security was also front and center. Namely, how do we balance ease-of-use against security/anonymity/privacy? In the words of one speaker, it’s “the harsh tradeoff between making it easy and making it self-sovereign.”

Chris Ferris of IBM asked “how do you secure and protect private keys? Maybe there’s some third-party custodian but then you're getting back to the idea of trusted third parties. Regulatory regimes and governments will have to figure out how to accommodate anonymity."

Tradeoffs and the real world

Which is as good a point as any to connect blockchain to the world that we live in.

As Dan Elitzer of IDEO coLAB commented, “if we move to a system where the easiest thing is to do things completely anonymously, regulators and law enforcement will lose the ability to track financial transactions and they’ll turn to other methods like mass surveillance.” Furthermore, many of the problems that exist with title registries, provenance tracking, the unbanked poor, and so on aren’t clearly the result of technology failure. Given the will and the money to address them in a systematic way that avoids corruption, monopolistic behaviors, and legal/regulatory disputes, there’s a lot that could be done in the absence of blockchains.

To take one fairly simple example that I was discussing with a colleague at the event: a lot of the information associated with deeds and titles in the US isn’t stored in the dusty file cabinets of county clerks because we lack the technology to digitize and centralize it. It’s there because of some combination of inertia, the lack of a compelling need to do things differently, and perhaps a generalized fear of centralizing data. In other situations, “inefficiencies” (perhaps involving bribes) and a lack of transparency are even more likely to be seen as features rather than bugs by at least some of the participants. Furthermore, just because something is entered into an immutable blockchain doesn’t mean it’s true.

Summing up

A few speakers alluded to how bitcoin has served as something of an existence proof for the blockchain concept. As Neha Narula, Director of Research at the MIT Media Lab’s Digital Currency Initiative, put it, bitcoin has “been out there for eight years and it hasn’t been cracked” even though “novel cryptographic protocols are usually fragile and hard to get right.”

At the same time, there’s a lot of work still required around issues like scalability, identity, how to govern consensus, and adjudicating differences between code and spec. (If the code is “supposed” to do one thing and it actually does another, which one governs?) And there are broader questions, some of which I’ve covered above. There are also fundamental ones like: Are permissioned and permissionless (i.e. public) blockchains really different or are they variations of the same thing? What are the escape hatches for smart contracts in the event of the inevitable bugs? What alternatives are there to proof of work? Where do monetary policy and cryptocurrency intersect?

I come back to Joi Ito’s “cautious but optimistic.”

-----

Photos: 

Top: Joi Ito, Director MIT Media Lab

Bottom: Amber Baldet, Executive Director, Blockchain Program Lead, J.P. Morgan

by Gordon Haff

Wednesday, April 19, 2017

DevOps Culture: continuous improvement for Digital Transformation

Marshmallow winners

In contrast to even tightly-run enterprise software practices, the speed at which big Internet businesses such as Amazon and Netflix can enhance, update, and tune their customer-facing services can be eye-opening. Yet a minuscule number of these deployments cause any kind of outage. These companies are different from more traditional businesses in many ways. Nonetheless, they set benchmarks for what is possible.

Enterprise IT organizations must do likewise if they’re to rapidly create and iterate on the new types of digital services needed to succeed in the marketplace today. Customers demand anywhere/anywhen self-service transactions, and winning businesses meet those demands better than their competition. Operational decisions within organizations must also increasingly be informed by data and analytics, requiring another whole set of applications and data sets.

Amazon and Netflix got to where they are using DevOps. DevOps touches many different aspects of the software development, delivery, and operations process. But, at a high level, it can be thought of as applying open source principles and practices to automation, platform design, and culture. The goal is to make the overall process associated with software faster, more flexible, and incremental. Ideas like continuous improvement based on metrics and data, which have transformed manufacturing in many industries, are at the heart of the DevOps concept.

Development tools and other technologies are certainly part of DevOps. 

Pervasive and consistent automation is often used as a way to jumpstart DevOps in an organization. Playbooks that encode complex multi-part tasks improve both speed and consistency. Automation can also improve security by reducing the number of error-prone manual processes. Even narrowly targeted uses of automation are a highly effective way for organizations to gain immediate value from DevOps.
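
To make “playbooks that encode complex multi-part tasks” concrete, here’s a hedged sketch in Python of the basic idea: an ordered list of named steps executed the same way every time, failing fast instead of drifting into a half-configured state. Real automation tools such as Ansible express this declaratively (typically in YAML) and add idempotence, inventory management, and rich reporting; the specific commands below are just assumptions for the example.

```python
import subprocess

# A toy "playbook": an ordered list of (name, command) steps.
PLAYBOOK = [
    ("install web server", ["dnf", "install", "-y", "httpd"]),
    ("enable and start service", ["systemctl", "enable", "--now", "httpd"]),
    ("open firewall port", ["firewall-cmd", "--permanent", "--add-service=http"]),
]

def run_playbook(tasks):
    for name, cmd in tasks:
        print(f"TASK [{name}]: {' '.join(cmd)}")
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # Fail fast and loudly rather than continuing on a broken host.
            raise RuntimeError(f"task '{name}' failed: {result.stderr.strip()}")

if __name__ == "__main__":
    run_playbook(PLAYBOOK)
```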

Modern application platforms, such as those based on containers, can also enable more modular software architectures and provide a flexible foundation for implementing DevOps. At the organizational level, a container platform allows for appropriate ownership of the technology stack and processes, reducing hand-offs and the costly change coordination that comes with them. 

However, even with the best tools and platforms in place, DevOps initiatives will fail unless an organization develops the right kind of culture. One of the key transformational elements is developing trust among developers, operations, IT management, and business owners through openness and accountability. In addition to being a source of innovative tooling, open source serves as a great model for the iterative development, open collaboration, and transparent communities that DevOps requires to succeed.

Ultimately, DevOps becomes most effective when its principles pervade an organization rather than being limited to developer and IT operations roles. This includes putting the incentives in place to encourage experimentation and (fast) failure, transparency in decision-making, and reward systems that encourage trust and cooperation. The rich communication flows that characterize many distributed open source projects are likewise important to both DevOps initiatives and modern organizations more broadly.

Shifting culture is always challenging and often needs to be an evolution. For example, Target CIO Mike McNamara noted in a recent interview that “What you come up against is: ‘My area can’t be agile because…’ It’s a natural resistance to change – and in some mission-critical areas, the concerns are warranted. So in those areas, we started developing releases in an agile manner but still released in a controlled environment. As teams got more comfortable with the process and the tools that support continuous integration and continuous deployment, they just naturally started becoming more and more agile.”

At the same time, there’s an increasingly widespread recognition that IT must respond to the needs of and partner with the lines of business--and that DevOps is an integral part of that redefined IT role. As Robert Reeves, the CTO of Datical, puts it: “With DevOps, we now have proof that IT can and does impact market capitalization of the company. We should staff accordingly.”

------------------

Photo credit: http://marshmallowchallenge.com/Welcome.html

Monday, April 17, 2017

DevSecOps at Red Hat Summit 2017


We’re starting to hear “DevSecOps" mentioned a lot. The term causes some DevOps purists to roll their eyes and insist that security has always been part of DevOps. If you press hard enough, they may even pull out a well-thumbed copy of The Phoenix Project by Gene Kim et al. [1] and point to the many passages which discuss making security part of the process from the beginning rather than a big barrier at the end.

But the reality is that security is often something apart from DevOps even today, even though DevOps should include continuously integrating and automating security at scale. That’s at least in part because security and compliance have historically operated largely in their own world. At a DevOpsDays event last year, one senior security professional even told me that it was the first non-security-specific IT event he had ever attended.

With that context, I’d like to point you to a session that my colleague William Henry and I will be giving at Red Hat Summit on May 3. In DevSecOps the open source way, we’ll discuss how the IT environment has changed across both development and operations. Think characteristics and technologies like microservices, component reuse, automation, pervasive access, immutability, flexible deploys, rapid tech churn, software-defined everything, a much faster pace, and containers.

Risk has to be managed across all of these. (Which is also a change. Historically, we tended to talk in terms of eliminating risk while today it’s more about managing risk in a business context.)

Doing so requires securing the software assets that get built as well as the machinery doing the building. It requires securing the development process from the source code through the rest of the software supply chain. It requires securing deployments and ongoing operations continuously and not just at a point in time. And it requires securing both the application and the container platform APIs.

We hope to see you at our talk. But whether or not you can make it to see us specifically, we hope that you can make it to Red Hat Summit in Boston from May 2-4. I’m also going to put in a plug for the OpenShift Commons Gathering on the day before (Monday, May 1).

-------------- 

[1] If you’re reading this, you’ve almost certainly heard of The Phoenix Project. But, if not, it’s a fable of sorts about making IT more flexible, effective, and agile. It’s widely cited as one of the source texts for the DevOps movement.

Thursday, April 13, 2017

Links for 04-13-2017

Wednesday, April 12, 2017

Podcasts: Talking cloud native projects at CloudNativeCon in Berlin


Eduardo Silva, Fluentd/Treasure Data

A project within the Cloud Native Computing Foundation, Fluentd is focused on logging: pulling together data from a variety of sources and sending it to a back-end. Eduardo Silva spoke with me at CloudNativeCon in Berlin about Fluentd and its flexible architecture for plug-ins. Fluentd is widely used for tasks like aggregating mobile stats and understanding how games are behaving.

Listen to MP3 (15:10)

Listen to OGG (15:10)

Miek Gieben, CoreDNS

CoreDNS, which provides a cloud-native DNS server and service discovery, recently joined the CNCF. In this podcast, Miek provides context about DNS and explains how today’s more dynamic environments aren’t always a good match for traditional approaches to DNS. Miek takes us through how CoreDNS came to be and discusses some possible future paths it might take.

Listen to MP3 (12:24)

Listen to OGG (12:24)

Björn Rabenstein, Prometheus/SoundCloud

Björn Rabenstein of SoundCloud sat down with me at CloudNativeCon in Berlin to discuss Prometheus, the first project to be brought into the Cloud Native Computing Foundation after Kubernetes. Prometheus is a popular open-source monitoring system with a dimensional data model, a flexible query language, an efficient time series database, and a modern alerting approach. In this podcast, we get into the background behind Prometheus, why new monitoring tools are needed for cloud-native environments, and when you should wake people up with an alert--and when you shouldn't.
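
As a small taste of that dimensional data model, here’s a sketch using the official Python client library, prometheus_client. The metric names, labels, and port are invented for the example; the Counter, Histogram, and start_http_server calls are the real client API. Once scraped, the labels become query dimensions in PromQL, e.g. rate(app_requests_total{endpoint="/api"}[5m]).

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Labels ("method", "endpoint") are the dimensions Prometheus lets you slice by.
REQUESTS = Counter("app_requests_total", "Total requests handled",
                   ["method", "endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

if __name__ == "__main__":
    start_http_server(8000)  # exposes metrics at http://localhost:8000/metrics
    while True:
        with LATENCY.time():                  # records elapsed time in the histogram
            time.sleep(random.random() / 10)  # stand-in for real request handling
        REQUESTS.labels(method="GET", endpoint="/api").inc()
```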

Listen to MP3 (16:38)

Listen to OGG (16:38)

Sarah Novotny, Kubernetes/Google

Sarah Novotny does open source community for Google Cloud and is also the program manager of the Kubernetes community. She has years of experience in open source communities including MySQL and NGINX. In the podcast we cover the challenges inherent in shifting from a company-led project to a community-led one, principles that can lead to more successful communities, and how to structure decision-making.

I’ve written an article with excerpts from this podcast which will appear on opensource.com. I’ll link to it from here when it’s available.

Listen to MP3 (20:54)

Listen to OGG (20:54)

Wednesday, April 05, 2017

Upcoming: MIT Sloan CIO Symposium


My May schedule has been something of a train wreck, given a Red Hat Summit in Boston (use code SOC17 for a discount) that’s earlier than usual and, more generally, lots of events in flight. As a result, I didn’t know until a couple of days ago whether I would be able to attend this year’s MIT Sloan CIO Symposium on May 24. I always look forward to going. This is admittedly in part because I get to hop on a train for an hour’s ride into Cambridge rather than a metal sky tube for many hours.

But it’s also because the event brings together executives who spend a lot of time focusing on the business aspects of technology change. As you’d expect from an MIT event, there’s also a heavy academic component from MIT and elsewhere. Erik Brynjolfsson, Andrew McAfee, and Sandy Pentland are regulars. As I have for the past few years, I’ll be hosting a lunchtime discussion table on a topic TBD as well as covering the event in this blog afterwards. 

Data, security, and IoT at MIT Sloan CIO Symposium 2016

MIT Sloan CIO Symposium 2015: Dealing with Disruption

This year the Symposium will focus on the theme, “The CIO Adventure: Now, Next and… Beyond,” and will provide attendees with a roadmap for the changing digital landscape ahead. Among the associated topics are challenges of digital transformation, talent shortages, executive advancement to the C-suite, and leading-edge research.

Here’s some additional information from the event organizers:

The full agenda is available at www.mitcio.com/agenda. Highlights include:

Kickoff Panel: “Pathways to Future Ready: The Digital Playbook” will discuss a framework for digital transformation and facilitate a conversation on lessons learned from executives leading these transformations. Virtually every company is working on transforming their business for the digital era and this panel will provide a playbook for digital. Featuring Peter Weill, Chairman, MIT Sloan Center for Information Systems Research (CISR); Jim Fowler, Vice President & Chief Information Officer, General Electric; David Gledhill, Group Chief Information Officer and Head of Group Technology & Operations, DBS; and Lucille Mayer, Head of Client Experience Delivery and Global Innovation, BNY Mellon.

Fireside Chat: “Machine | Platform | Crowd: Harnessing Our Digital Future” will be moderated by Jason Pontin, Editor-in-Chief and Publisher of MIT Technology Review and feature Erik Brynjolfsson, Director, and Andy McAfee, Co-Director, of the MIT Initiative on the Digital Economy (IDE), discussing what they call "the second phase of the second machine age." This phase has a greater sense of urgency, as technologies are demonstrating that they can do much more than just the type of work we have thought of as routine. The last time new technologies had such a huge impact on the business world was about a century ago, when electricity took over from steam power and transformed manufacturing. Many successful incumbent companies, in fact most of them, did not survive this transition. This panel will enable CIOs to rethink the balance between minds and machines, between products and platforms, and between the core and the crowd.

Other panel sessions driven by key IT leaders, practitioners, and MIT researchers will include:

“The Cognitive Company: Incremental Present, Transformational Future”; “Cloud Strategies: The Next Level of Digital Transformation”; “The CIO Adventure: Insights from the Leadership Award Finalists”; “Preparing for the Future of Work”; “Expanding the Reach of Digital Innovation”; “Running IT Like a Factory”; “Navigating the Clouds”; “Winning with the Internet of Things”; “Talent Wars in the Digital Age”; “Who’s Really Responsible for Technology?”; “You Were Hacked—Now What?”; “Measuring ROI for Cybersecurity: Is It Real or a Mirage?”; “Putting AI to Work”; “Trusted Data: The Role of Blockchain, Secure Identity, and Encryption”; and “Designing for Digital.”

Friday, March 17, 2017

Links for 03-17-2017

Friday, March 10, 2017

Video: A Short History of Packaging

From Monkigras London 2017

We’re in the middle of big changes in how we bundle up software and deliver it. But packaging didn’t start with software. I take you on a tour of how we’ve packaged up goods for consumption over time and--more importantly--why we did so and what different approaches we’ve taken then and now. The goal of this talk is to take the packaging discussion up a level so as to better focus on the fundamental objectives and some of the broad approaches and trade-offs associated with achieving them.

Tuesday, March 07, 2017

Final Open Source Leadership Summit podcast roundup


I recorded a number of podcasts at the Open Source Leadership Summit in Lake Tahoe last month. Most of them are with the heads of various foundations under the Linux Foundation. They’re each about 15 minutes long. In addition to the podcasts themselves, which are linked from the blog posts, five have transcripts and two have associated stories on opensource.com.

opensource.com

Heather Kirksey, Open Platform for Network Functions Virtualization (OPNFV) 

"Telecom operators are looking to rethink, reimagine, and transform their networks from things being built on proprietary boxes to dynamic cloud applications with a lot more being in software. [This lets them] provision services more quickly, allocate bandwidth more dynamically, and scale out and scale in more effectively."

Mikeal Rogers, node.js

"The shift that we made was to create a support system and an education system to take a user and turn them into a contributor, first at a very low level and educate them to bring them into the committer pool and eventually into the maintainer pool. The end result of this is that we have a wide range of skillsets. Rather than trying to attract phenomenal developers, we're creating new phenomenal developers."

Connections with transcripts

Brian Behlendorf, Hyperledger

 "That's what gets me excited is these positive social impacts that at the same time, are also potentially helping solve structural problems for the business sector. I haven't seen that kind of synergy, that kind of combination of value from these two different things since the early days of the Internet."

Dan Kohn, Cloud Native Computing Foundation (CNCF)

"When you have those developers that feel like their contributions are valued and taken seriously, then there's a whole ecosystem that forms around them, of companies that are interested in offering services to them, employing them, that want to make these services available to other folks. Then a foundation like ours can come up and help make those services available. I really think that, that developer focus is the key thing to keep in mind."

Nicko Van Someren, Core Infrastructure Initiative (CII)

  "Going forwards, we're trying to move to probably put more into the strategic stuff, because we feel like we can get better leverage, more magnification of the effect, if we put money into a tool and the capabilities to use that tool. I think one of the things we're looking at for 2017 is work to improve the usability of a lot of security tools.There's no shortage of great tools for doing static analysis or fuzz testing, but there is often a difficulty in making it easy for you to integrate those into a continuous test process for an open‑source project. Trying to build things to make it easier to deploy the existing open‑source tools is an area in the strategic spin that we want to put a lot into in 2017."

Chris Aniszczyk, Open Container Initiative (OCI)

 "People have learned their lessons, and I think they want to standardize on the thing that will allow the market to grow. Everyone wants containers to be super‑successful, run everywhere, build out the business, and then compete on the actual higher levels, sell services and products around that, and not try to fragment the market in a way where people won't adopt containers, because they're scared that it's not ready, it's a technology that's still [laughs] being developed."

Al Gillen, IDC

“With container technology and the ability to produce a true cloud‑native application that's running on some kind of a framework which happens to be available on‑prem or in cloud, you suddenly have the ability to move that application on‑prem or off‑prem, or both ‑‑ run in both places at the same time if so you choose ‑‑ and be able to do that in a way that's been unprecedented in our industry."

Connections

In addition to the above podcasts with foundation directors and analysts, I also sat down with Josh Bernstein, VP of Technology, and Clint Kitson, Technical Director for {code} by Dell EMC to talk about open source and communities.

Friday, February 24, 2017

Podcast: Talking open source and communities with {code} by Dell EMC

Josh Bernstein, VP of Technology, and Clint Kitson, Technical Director for {code} by Dell EMC sat down with me at the Open Source Leadership Summit to talk about their plans for this strategic initiative.

{code} by Dell EMC

Audio:
Link to MP3 (00:13:22)
Link to OGG (00:13:22)

Podcast: Security and Core Infrastructure Initiative with Nicko Van Someren

As the CTO of the Linux Foundation, Nicko Van Someren also heads the Core Infrastructure Initiative. The CII was created in the wake of high-visibility issues with widely-used but poorly funded open source infrastructure projects. (Most notably, the Heartbleed vulnerability in OpenSSL.) In this podcast, Nicko discusses how the CII works, his strategy moving forward, and how consumers of open source software can improve their security outcomes.

In addition to discussing the CII directly, Nicko also talked about encouraging open source developers to think about security as a high priority throughout the development process--as well as the need to cultivate this sort of thinking, and to get buy-in, across the entire community.

Nicko also offered advice about keeping yourself safe as a consumer of open source. His first point was that you need to know what code you have in your product. His second was to get involved with open source projects that are important to your product because "open source projects fail when the community around them fails."

Core Infrastructure Initiative, which includes links to a variety of resources created by the CII

Audio:
Link to MP3 (00:15:01)
Link to OGG (00:15:01)

Transcript:

Gordon Haff:   I'm sitting here with Nicko van Someren, who's the CTO of the Linux Foundation, and he heads the Core Infrastructure Initiative. Nicko, give a bit of your background, and explain what the CII is?
Nicko van Someren:  Sure. My background's in security. I've been in the industry‑side of security for 20 plus years, but I joined the Linux Foundation a year ago to head up the Core Infrastructure Initiative, which is a program to try and drive improvement in the security outcomes in open‑source projects. In particular, in the projects that underpin an awful lot of the Internet and the businesses that we run on it. The infrastructural components, those bits of open source that we all depend on, even if we don't see them on a day‑to‑day basis.
Gordon:  Around the time that you came in, you've been in the job, what, a little over a year, is that right? There were some pretty high visibility issues with some of that infrastructure.
Nicko:  Yeah, and I think it goes back a couple of years further. Around three years ago, the Core Infrastructure Initiative ‑‑ we call it the CII ‑‑ was set up, largely in the wake of the Heartbleed bug, which impacted nearly 70 percent of the web servers on the planet.
We saw a vulnerability in a major open‑source project, which had very profound impact on people across the board, whether they were in the open‑source community, or whether they were running commercial systems, or whether they were building products on top of open source. All of these people were impacted by this very significant bug.
While the community moved swiftly to fix the bug and get the patch out there, it became very apparent that as the world becomes more dependent on open‑source software, it becomes more and more critical that those who are dependent on it support the development of those projects and support improving the security outcomes of those projects.
Gordon:  With many of the projects that we're talking about, there was a tragedy of the commons sort of situation, where you had a few volunteers ‑‑ not being paid by anyone, asking for donations on their PayPal accounts ‑‑ who, in many cases, were responsible for these very critical systems.
Nicko:  Absolutely. Probably trillions of dollars of business were being done in 2014 on OpenSSL, and yet in 2013, they received 3,000 bucks worth of donations from industry to support the development of the project. This is quite common for the projects that are under the hood, not the glossy projects that everybody sees.
The flagship projects get a lot of traction with a big community around them, but there's all of this plumbing underneath that is often maintained by very small communities ‑‑ often one or two people ‑‑ without the financial support that comes with having big businesses putting big weight behind them.
Gordon:  What exactly does the CII do? You don't really code, as I understand it.
Nicko:  Well, I code in my spare time, but the CII doesn't develop code itself, for the most part. What we do is, we work to identify at‑risk projects that are high‑impact but low‑engagement.
We try to support those projects with things like doing security audits where appropriate, by occasionally putting engineers directly on coding, often putting resources in architecture and security process to try to help them help themselves by giving them the tools they need to improve security outcomes.
We're funding the development of new security testing tools. We're providing tools to help projects assess themselves against well‑understood security practices that'll help give better outcomes. Then, when they don't meet all the criteria, help them achieve those criteria so that they can get better security outcomes.
Gordon:  In terms of the projects under the CII, how do you think about that? What's the criteria?
Nicko:  We try to take a fairly holistic approach. Sometimes we're investing directly in pieces of infrastructure that we all rely on, things like OpenSSL, Bouncy Castle, GnuPG, or OpenSSH, other security‑centric projects.
But also things like last year, we were funding a couple of initiatives in network time, those components that we're all working with, but we don't necessarily see at the top layer. We're also funding tooling and test frameworks, so we have been putting money into a project called Frama‑C, which is a framework for C testing.
We've been funding The Fuzzing Project, which is an initiative to do fuzz testing on open‑source projects and find vulnerabilities and report them and get them fixed.
We've been working with the Reproducible Build project to get binary reproducibility of build processes, so the people can be sure that when they download a binary, they know that it matches what would have been built if they downloaded the source.
We're also funding some more educational programs, for instance, the Badging Program allows people to assess themselves against a set of practices which are known good security practices, and they get a little badge for their GitHub project or for their website if they meet those criteria.
We have a Census Project, where we've been pooling different sets of data about the engagement in projects and the level of bug reporting and the quickness of turn‑around of bug fixes, and the impact of those projects in terms of who's dependent on it, and try to synthesize some information about how much risk there is.
Then, publish those risk scores and encourage fixes. We're trying to take a mixture of some fairly tactical approaches, but also have investment in some strategic approaches, which are going to lead to all open‑source projects getting better security outcomes in the long run.
Gordon:  How do you split those? Certainly, some of the projects, particularly early on, it was very tactical, "There's frankly a house fire going on here, and it needs to be put out."
Then, some of the things that you're doing in terms of the assessment checklists and things like that, that feels much more strategic and forward‑looking. How do you balance those two, or if you could put a percentage, even, "Oh, I spend 30 percent of my time doing this?"
Nicko:  That's, of course, the perennial question. We have finite resources and huge need for this. Resource allocation is what I ask input from my board members for how they think. We, historically, have had a fairly even split between the tactical and the strategic.
Going forwards, we're trying to move to probably put more into the strategic stuff, because we feel like we can get better leverage, more magnification of the effect, if we put money into a tool and the capabilities to use that tool. I think one of the things we're looking at for 2017 is work to improve the usability of a lot of security tools.
There's no shortage of great tools for doing static analysis or fuzz testing, but there is often a difficulty in making it easy for you to integrate those into a continuous test process for an open‑source project. Trying to build things to make it easier to deploy the existing open‑source tools is an area in the strategic spin that we want to put a lot into in 2017.
Gordon:  As we also look forward at some of the areas that are developing in this point, Automotive Grade Linux, for example, AirNav's things, there's new vectors of threats coming in, and areas of infrastructure that maybe historically weren't that important from a security perspective are becoming much more so. What's on your radar in that regard?
Nicko:  I think, obviously, one of the biggest issues that we're facing going forwards is with Internet of Things. I think we have been seeing a lot of people forgetting all the things that we've learned in desktop and server security over the years, as they rush into getting things out there, Internet‑connected.
Often, it's easy to have a good idea about Internet‑connecting something and building a service around it. It's less easy to think about the security implications of doing that in a hasty manner.
We've been talking with a number of players in this space about, "How do we adapt some of the programs we've already built for improving the security process in open‑source projects to apply those to the development of IoT devices?" I think that we can do quite a lot in that space, just with the tools we've already got, tuning them to the appropriate community.
Gordon:  Anything else that you'd like to talk about?
Nicko:  One of the biggest issues that we face is improving the security outcomes in open source is to encourage open‑source developers to think about security as a high priority, as high a priority as performance or scalability or usability.
We've got to put security up there as one of the top priority list items. We also have to make sure that, because most open‑source projects get developed in a very collaborative way with a community around them, that you get buy‑in to that taking it as a priority across the whole community.
That's the best first step to getting good security outcomes, is to have people think about security early, have them think about it often, and have them keep it as a top‑of‑mind priority as they go through the development process. If they do that, then you can get very good security outcomes just by using the same practices we use everywhere else in software engineering.
Gordon:  In one of the areas I work around, DevOps and continuous integration and application platforms, one of the terms that's starting to gain currency is the DevSecOps term, and the push‑back on that is, "Oh, we know security needs to be in DevOps." Well, if you know it, it doesn't happen a lot of the time.
Nicko:  I think that's true. I think it's a question of making sure that you have it as a priority. At my last company, I was actively involved in doing high‑security software, but we were using an agile development process.
We managed to square those two by making sure the security was there in the documentation as the definition of done. You couldn't get through the iterative process without making sure that you were keeping the threat models up to date and going through the security reviews.
Code review ought to involve security review as well as just making sure that the tabs are replaced by four spaces. We need to integrate security into the whole process of being a community of developers.
Gordon:  One other final area, and it's probably less under the purview of something like the CII, but as we've been much talking about in this conference, open source has become pervasive, and that's obviously a great thing.
It also means that people are in the position of grabbing a lot of code ‑‑ perfectly legally ‑‑ from all kinds of different repositories and sticking it into their own code, and it may not be the latest version, it may have vulnerabilities.
Nicko:  Absolutely, and I think, key to keeping yourself safe as a consumer of open source...
Well, there are probably two things there. One is you need to know what you've got in your products, whether you built them yourself or whether you brought them in, there's going to be open source in there.
You need to know what packages are in there, you need to know what versions of packages are in there. You need to know how those are going to get updated as the original projects get updated. That whole dependency tracking needs to be something that you think about as part of your security operations process.
The other bit is, get involved. Open‑source projects fail when the community around them fails. If you want a good security outcome from the open‑source projects that you use, get involved. Don't just complain that it doesn't work, come up with a good diagnose bug report and file it.
Maybe produce a patch, and even if you don't produce the patch that gets accepted, you've given them the idea for how to fix it, and they'll go and recode it in their own style. If you're going to be dependent on the security of this project, put an engineer on it.
Get involved in these projects, and that's the way to make sure that you get really good security outcomes, is for people who care about the security of these products to get involved.

Gordon:  Well, I think that's as good a finish as any! Thank you.

Podcast: Open source and cloud trends with IDC's Al Gillen

Al Gillen is responsible for open source research and oversees developer and DevOps research at IDC. Al gave a keynote at the Open Source Leadership Summit at which he provided some historical context for what's happening today in open source and presented recent research on digital transformation, commercial open source support requirements, and how organizations are thinking about cloud-native architecture adoption and deployment.

Listen to the podcast for the whole conversation, but a few specific points that Al made were:

Digital transformation can be thought of as taking physically connected systems and logically connecting them, i.e. connecting the processes, the data, and the decision-making.

It's important to bridge new cloud-native systems to existing functionality. Organizations are not going to be rewriting old applications for the most part and those "legacy" systems still have a great deal of value.

Enterprises are asking for open source DevOps tools, but most are specifically asking for commercially-supported open source tools.

Audio:
Link to MP3 (00:15:46)
Link to OGG (00:15:46)

Transcript:

Gordon Haff:  Hi, everyone. Welcome to another edition of the "Cloudy Chat" podcast. I'm here at the Open Source Leadership Summit with Al Gillen of IDC, who gave one of the keynotes this morning. Welcome, Al. How about giving a little background about yourself?
Al Gillen:  Hey, Gordon, thanks a lot. Thanks, everybody for listening. This is Al Gillen. I'm a group vice president at IDC. I'm responsible for our open source research, and oversee our developer and DevOps research.
Gordon:  One of the things you went through in your keynote this morning was the historical perspective of how Linux is developed. Both of us have pretty much been following Linux from the beginning, certainly from its beginnings as something that was interesting commercially. Maybe you could recap that in a couple of minutes or so.
Al:  I actually went back to a presentation I delivered to the Linux Foundation, an event at the Collaboration Summit that was back in 2008. I pulled those slides up, because I was curious. "What can I learn from what we talked about back then, and how does that relate to what's going on in the industry today?"
I went back, and I pulled up the deck. I was looking at some of the things that I thought were really interesting. For example, I was looking at one of the first pieces of data, which compared perceptions of Linux from 1999 and 2001.
Remember what the time frame was there. Linux had only just begun to be commercially accepted in the '99‑2000/2001 time frame. One of the things that served as a significant accelerator for Linux at that time frame was the dot‑com bust.
What happened then is we had a big contraction in the stock market. Most large companies, what they did is they went and they started to cut costs. We all know that one of the places they first cut costs is IT.
Suddenly, the IT departments were charged with standing up new Web servers and new network‑infrastructure servers and so forth, and they had no budget to do it. What did they do?
They went and they got a free copy of Linux. They recycled a well‑equipped PC or x86 server that had been taken out of service, and they turned it into a Linux server.
When we look back at the data that we saw then, really, one of the big drivers for Linux was initial price. People said, "Yeah, it was great. The cost was really low." One of the things that was also amazing was the users back then rated the reliability of Linux as very, very high.
In fact, when you compare it to other operating systems, it compared very favorably to much more mature operating systems. That context was really fascinating, but when you think about it, that was just the beginning of a long gestation period for Linux.
Over the next, what, seven, eight, nine years, as Linux became a truly mature and a truly robust commercial operating system that had both the features, had the application portfolio, and had the customer base to use it, it took, basically a decade to get there.
Gordon:  You've been doing some more research recently. What are your numbers showing today?
Al:  A couple of things that I showed in the presentation today. One is we presented data on Linux operating system shipments. One of the things that's happened over the last few years is that Linux has continued to accelerate, in part because of the build‑out of cloud.
Most of the public cloud infrastructure, with the exception of the Microsoft Azure cloud, is almost all Linux. To the extent that Google continues to build out and Amazon continues to build out, and companies like that ‑‑ Facebook, Twitter, and so forth, it's primarily Linux being stood up.
That has driven the growth of non‑commercial Linux, meaning distributions that are not supported by a commercial company you might think of; rather, they're either CentOS, or Debian, or potentially Ubuntu that's not supported, things like that, as well as Amazon Linux, and Google's own Linux and so forth.
That's been really where a lot of the growth is, but that's not to say that there hasn't been growth in the commercial side of the market. There's been growth there as well.
Gordon:  What are some of the drivers that you hear? I know you did some research for us. You also have some research here around commercially‑supported environments and maybe some of the reasons why people buy those.
Al:  That has been something which has been really consistent through the years. We find that large enterprise organizations have a tendency to prefer commercially‑supported software.
That has always been the case with Linux and yes, we find that there is [also] non‑commercial Linux. You'll talk to any enterprise ‑‑ and you could talk to a really big Red Hat shop or a big SUSE shop, and you ask them, "What is in your infrastructure?" They'll typically tell you that, "Yeah, we're 95 percent or 98 percent Red Hat, but we've also got some CentOS," or, "We've got some Debian," or, "We've got SUSE for this one application."
They generally have a mix of other things in there. The same thing, if you talk to a SUSE shop, where they'll say, "Yeah, we're mostly SUSE, but we've got some openSUSE," or again, "We've got some Ubuntu or CentOS, or something else in the mix."
The reason why is that these things get stood up for workloads that are considered not critical. Maybe they might be something simple like a DNS server, maybe something that's a print file server, or maybe a Web server which is providing some internal Web serving capabilities, something that's not critical if it disappears off the network suddenly. There's not going to be customers that are left hanging.
Gordon:  Let's switch gears and talk about digital transformation. This is one of those terms that I think is almost a cliche at this point, at least when you go to these types of events, because we hear it at every one of these events.
As somebody I was talking to recently said, just because it's cliche doesn't mean it's not an important trend. What are some of the things that IDC is seeing about digital transformation?
Al:  If we go out to, say 2025, and we look back, I believe that we're going to look back and say, "Yeah, the mid‑teen years were important years from a digital transformation perspective."
When we at IDC talk about digital transformation, what we're really talking about is we're talking about the interconnection of all of the systems that are in our environments.
When I say interconnection, we're not talking about getting them all on the same network. We've done that. It's been done for 20 years, already. What we're talking about is interconnecting the processes, interconnecting the data, interconnecting the decision‑making. In many cases, that's not done.
We've got systems that are physically connected, but are not connected from a logical sense. That's what's happening with digital transformation. I might add that the way we expect that that's going to happen is it's going to be a model where we're going to be building new applications that are going to essentially bridge the existing functionality that's on these servers.
We're not going to be rewriting those old applications in a cloud‑native format, for example. We're going to keep those applications. We may wrap them with some consistent API so we can get access to the logic and the data.
But at the end of the day, the business value that's in those applications that are in place today remains valuable, and frankly, it's going to mean that the applications themselves and the servers that they run on are workloads that are going to be around for the long term.
Gordon:  I think that's a really important point, because one of the things that I hear a lot when I talk to customers is the importance of this bridging of the older systems that may be modernizing, but as you say, you're not turning them into cloud‑native systems.
On the other hand, you have these cloud‑native infrastructures. I think probably in the industry, there's too much thinking of those as two disconnected islands, and not enough thinking of the bridges, the integrations, and so forth, between them.
Al:  There's a really good parallel here, and I like to bring this story up, especially when I'm in a room full of end users. I like to say, "You guys remember what you were doing in 1999?" You get a little bit of a quizzical look, and I say, "Were you remediating any applications that had ywo digit date codes?"
People start nodding their heads. The next question I say is, "What did you do? Did you fix them?" and the heads keep nodding. I say, "OK, can I see a show of hands? How many of you have gotten rid of those systems, or do you still have them in use?"
All those same people, they put their hands up, reluctantly, I might add, and say, "Yeah, we still have those systems in use." The point being is that the value of the systems does not go away. The value of the systems is in the data. It's in the processes and the business logic that are coded in those applications.
Going forward, we think the same thing's going to be true for the distributed computing environment. All of the Linux servers that you have in place, all of the Windows servers you have in place, have real important business value. The logic and the data there is really valuable to your business, which means that you're going to want to use that going forward.
I do agree with you, when we build that cloud‑native applications, they're going to help bridge these systems, but don't for a minute assume that those old systems have no value left.
Gordon:  As we talk about the new applications that are going to be required for this digital transformation, what are some of the...I think you even used the term, they were the "pivot point," this morning. Tell us a little bit more about that.
Al:  Again, taking a long view, so if we look out, say, if you go out to 2025, and you look back, I think that we'll be able to draw a line in history and say, "Somewhere between 2015 and 2017 or 2018, there's going to be this line where everything before that line will be considered a legacy application, and everything after that line is probably going to be a cloud‑native and a modern application."
Again, let's not associate the term legacy application with something that has no value. Let's assume that that's an architectural statement more than anything else.
When I think about it, I believe we're right in the midst of this transition where we begin to build all of our applications using a cloud‑native format, which means that our applications are built to run on‑prem in private cloud, or off‑prem in public cloud, which means that we have flexibility on where they run, how we want to run them, how we want to scale them.
The other thing I might add is remember that cloud‑native and cloud‑scale are not necessarily the same thing. There's lots and lots of applications that should be cloud‑native, but not all applications have to have cloud scale.
Take for example your average enterprise. You've got ‑‑ pick your number ‑‑ it's 1 thousand, 10 thousand, 100 thousand users, whatever the number is, that access your business applications. That number does not scale up to 1 million or 10 million overnight.
By comparison, somebody who's doing, say, business to consumer, where there could potentially be a consumer event that causes everybody to come in and access that application. There you'd need to have the ability to do cloud‑level scaling.
Gordon:  Al, going back to the commercial support still being important, you certainly see in the cloud an awful lot of this consumption of free software, but you mentioned earlier that enterprises by and large do want commercialized tools. We're not talking just operating systems here.
Al:  No, in fact, operating systems are probably one of the best understood pieces of open‑source software today. As we go up the stack, customers still see value associated with commercialization, so a company that will take your project and make it something that is consumable will provide the support.
The reason why that's so valuable is that then the company does not have to have the expertise on staff. You don't have to have a kernel guy, you don't have to have a guy that knows how to patch Xen, for example, or a KVM if there's a problem.
It's really important to have that taken care of by somebody if you're a commercial organization, which is in the business of selling widgets or manufacturing things, or providing health care. That's your primary business. Your primary business is not being in the business of IT.
We ran a survey earlier this year. I guess it was actually late in 2016. We were talking to people about their consumption of DevOps products. We asked a question which I think was really interesting.
The question was if they have a chance to buy a product, are they going to look at a product which is open‑source, or are they going to look at a product which is likely to be a closed‑source and/or a proprietary‑type product?
What we found is we asked people to rank their preference on these things. The answer came back as 45 percent of the people we talked to ranked their preference for an open‑source base product as their first choice over anything else.
If you asked them what their preferences were, for example, for a proprietary product, only 15 percent of the people said that was their first choice.
The reason why this is really interesting is that when we look at that, these companies are telling us that they want an open‑source base product, but they also told us that they wanted it to be commercially supported.
They could get the bits and run it as a project themselves. We asked that question as well. That's not what they're asking for. They're asking for it to be commercially supported. The reason why is if it breaks, you pick up the phone or you get on your computer and send an email, and you say, "Fix it."
Gordon:  That was true going back with Linux in the early days as well. I think one of the things that's happening today when you look at DevOps tools is there is an incredible amount of innovation and number of products out there. That's good news.
The bad news is there's an incredible amount of innovation, rapidly changing products, and a need to integrate all of those together.
Al:  You know what, Gordon? It's one of the challenges that we've had with fast‑moving markets like this. When I say markets, I'm referring to open‑source, collectively.
The problem we have is that the technology changes so fast that the people, the end‑user organizations, are not able to gain the skills fast enough to keep up with these technology changes.
Frankly, when we asked questions about Linux in the early 2000s, people said one of their top challenges was having the skill set to support Linux. Today, we find the same questions about things like container technology and using DevOps tools.
Gordon:  To net it all out and close this out, what are some of the recommendations, the guidance, that you give on probably pretty much a daily basis to your clients?
Al:  There's a few things. Number one, recognize that cloud‑native applications are going to be architected very differently than classic applications. That's pretty much a given, but when you think about it, it affects your choice of tools, it affects your choice of deployment scenarios, and it affects your skills that you need to have on staff.
Another thing is recognize that we've moved to an era of platform independence far beyond anything we ever had before. We always like to talk about platform independence, but we've never really had it.
Now, with container technology and the ability to produce a true cloud‑native application that's running on some kind of a framework which happens to be available on‑prem or in cloud, you suddenly have the ability to move that application on‑prem or off‑prem, or both ‑‑ run in both places at the same time if so you choose ‑‑ and be able to do that in a way that's been unprecedented in our industry.
Finally, just to reiterate the other point: recognize that the existing applications don't lose their value. They still have value. Yes, they may get bundled up in a VM, or maybe packaged up in a container and dropped into somebody's IaaS cloud, but they're going to be around for the long term, and recognize that that's something you have to support.

Again, driving home that point I made earlier, recognize that all the new applications we build today are going to have to bridge the classic applications we've had, and the data that those applications support together with the modern things that we're going to be doing with our new applications.