Jon Collins, Author at Gigaom – https://gigaom.com/author/joncollins/

Demystifying data fabrics – bridging the gap between data sources and workloads
https://gigaom.com/2025/01/15/demystifying-data-fabrics-bridging-the-cap-between-data-sources-and-workloads/ (January 15, 2025)

The term “data fabric” is used across the tech industry, yet its definition and implementation can vary. I have seen this across vendors: in autumn last year, British Telecom (BT) talked about its data fabric at an analyst event; meanwhile, in storage, NetApp has been reorienting its brand around intelligent infrastructure but was previously using the term. Application platform vendor Appian has a data fabric product, and database provider MongoDB has also been talking about data fabrics and similar ideas.

At its core, a data fabric is a unified architecture that abstracts and integrates disparate data sources to create a seamless data layer. The principle is to create a single, synchronized layer between those disparate sources and whatever needs access to the data—your applications, workloads, and, increasingly, your AI algorithms or learning engines.

There are plenty of reasons to want such an overlay. The data fabric acts as a generalized integration layer, plugging into different data sources and adding advanced capabilities on top—giving applications, workloads, and models access to those sources while keeping them synchronized.

So far, so good. The challenge, however, is that we have a gap between the principle of a data fabric and its actual implementation. People are using the term to represent different things. To return to our four examples:

  • BT defines data fabric as a network-level overlay designed to optimize data transmission across long distances.
  • NetApp’s interpretation (even with the term intelligent data infrastructure) emphasizes storage efficiency and centralized management.
  • Appian positions its data fabric product as a tool for unifying data at the application layer, enabling faster development and customization of user-facing tools. 
  • MongoDB (and other structured data solution providers) considers data fabric principles in the context of data management infrastructure.

How do we cut through all of this? One answer is to accept that we can approach it from multiple angles. You can talk about data fabric conceptually—recognizing the need to bring together data sources—but without overreaching. You don’t need a universal “uber-fabric” that covers absolutely everything. Instead, focus on the specific data you need to manage.

If we rewind a couple of decades, we can see similarities with the principles of service-oriented architecture, which looked to decouple service provision from database systems. Back then, we discussed the difference between services, processes, and data. The same applies now: you can request a service or request data as a service, focusing on what’s needed for your workload. Create, read, update and delete remain the most straightforward of data services!
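
To make the idea concrete, here is a minimal sketch of what “data as a service” can look like behind a fabric-style abstraction: workloads issue CRUD requests against entities, and a routing layer decides which underlying source answers. The connector interface, class names, and entities are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch of a fabric-style "data as a service" layer: workloads make
# CRUD calls against entities and never touch the underlying stores directly.
# Connector and class names are invented purely to illustrate the decoupling.

from abc import ABC, abstractmethod
from typing import Any, Dict, Optional


class DataConnector(ABC):
    """One per underlying source (warehouse, document store, SaaS API...)."""

    @abstractmethod
    def create(self, entity: str, record: Dict[str, Any]) -> str: ...
    @abstractmethod
    def read(self, entity: str, record_id: str) -> Optional[Dict[str, Any]]: ...
    @abstractmethod
    def update(self, entity: str, record_id: str, changes: Dict[str, Any]) -> None: ...
    @abstractmethod
    def delete(self, entity: str, record_id: str) -> None: ...


class DataFabric:
    """Routes CRUD requests to whichever source owns the entity."""

    def __init__(self) -> None:
        self._routes: Dict[str, DataConnector] = {}

    def register(self, entity: str, connector: DataConnector) -> None:
        self._routes[entity] = connector

    def read(self, entity: str, record_id: str) -> Optional[Dict[str, Any]]:
        # The calling workload neither knows nor cares which system answers.
        return self._routes[entity].read(entity, record_id)
```

The point is the decoupling: sources can be swapped or consolidated behind the fabric, and the calling workloads do not change.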

I am also reminded of the origins of network acceleration, which would use caching to speed up data transfers by holding versions of data locally rather than repeatedly accessing the source. Akamai built its business on how to transfer unstructured content like music and films efficiently and over long distances. 

That’s not to suggest data fabrics are reinventing the wheel. We are in a different (cloud-based) world technologically; plus, they bring new aspects, not least around metadata management, lineage tracking, compliance and security features. These are especially critical for AI workloads, where data governance, quality and provenance directly impact model performance and trustworthiness.

If you are considering deploying a data fabric, the best starting point is to think about what you want the data for. Not only will this help orient you towards what kind of data fabric might be the most appropriate, but this approach also helps avoid the trap of trying to manage all the data in the world. Instead, you can prioritize the most valuable subset of data and consider what level of data fabric works best for your needs:

  1. Network level: To integrate data across multi-cloud, on-premises, and edge environments.
  2. Infrastructure level: If your data is centralized with one storage vendor, focus on the storage layer to serve coherent data pools.
  3. Application level: To pull together disparate datasets for specific applications or platforms.

For example, in BT’s case, they’ve found internal value in using their data fabric to consolidate data from multiple sources. This reduces duplication and helps streamline operations, making data management more efficient. It’s clearly a useful tool for consolidating silos and improving application rationalization.

In the end, data fabric isn’t a monolithic, one-size-fits-all solution. It’s a strategic conceptual layer, backed up by products and features, that you can apply where it makes the most sense to add flexibility and improve data delivery. Deploying a data fabric isn’t a “set it and forget it” exercise: it requires ongoing effort to scope, deploy, and maintain—not only the software itself but also the configuration and integration of data sources.

While a data fabric can exist conceptually in multiple places, it’s important not to replicate delivery efforts unnecessarily. So, whether you’re pulling data together across the network, within infrastructure, or at the application level, the principles remain the same: use it where it’s most appropriate for your needs, and enable it to evolve with the data it serves.

Making Sense of Cybersecurity – Part 2: Delivering a Cost-effective Response
https://gigaom.com/2025/01/09/making-sense-of-cybersecurity-part-2-delivering-a-cost-effective-response/ (January 9, 2025)

At Black Hat Europe last year, I sat down with one of our senior security analysts, Paul Stringfellow. In this section of our conversation (you can find the first part here), we discuss balancing cost and efficiency, and aligning security culture across the organization.

Jon: So, Paul, we’ve been in an environment where there are problems everywhere and you’ve got to fix everything—we need to move beyond that. In the new architectures we now have, we need to be thinking smarter about our overall risk. This ties into cost management and service management—being able to grade our architecture in terms of actual risk and exposure from a business perspective.

So, I’m kind of talking myself into needing to buy a tool for this because I think that in order to cut through the 50 tools, I first need a clear view of our security posture. Then, we can decide which of the tools we have actually respond to that posture because we’ll have a clearer picture of how exposed we are.

Paul: Buying a tool goes back to vendors’ hopes and dreams—that one tool will fix everything. But I think the reality is that it’s a mix of understanding what metrics are important. Understanding the information we’ve gathered, what’s important, and balancing that with the technology risk and the business impact. You made a great point before: if something’s at risk but the impact is minimal, we have limited budgets to work with. So where do we spend? You want the most “bang for your buck.”

So, it’s understanding the risk to the business. We’ve identified the risk from a technology point of view, but how significant is it to the business? And is it a priority? Once we’ve prioritized the risks, we can figure out how to address them. There’s a lot to unpack in what you’re asking. For me, it’s about doing that initial work to understand where our security controls are and where our risks lie. What really matters to us as an organization? Go back to the important metrics—eliminating the noise and identifying metrics that help us make decisions. Then, look at whether we’re measuring those metrics. From there, we assess the risks and put the right controls in place to mitigate them. We do that posture management work. Are the tools we have in place responding to that posture? This is just the internal side of things, but there’s also external risk, which is a whole other conversation, but it’s the same process.

So, looking at the tools we have, how effective are they in mitigating the risks we’ve identified? There are lots of risk management frameworks, so you can probably find a good fit, like NIST or something else. Find a framework that works for you, and use that to evaluate how your tools are managing risk. If there’s a gap, look for a tool that fills that gap.
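
(As an illustrative aside: the kind of gap analysis Paul describes can start out very simply. The sketch below uses the six NIST CSF 2.0 function names as an example framework; the tool inventory and its mappings are invented for the example.)

```python
# Minimal sketch of a framework-based gap analysis: map each tool you own to
# the framework functions it covers, then see which functions are uncovered.
# Function names follow NIST CSF 2.0; tools and mappings are invented.

FRAMEWORK_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

# Hypothetical inventory: tool -> functions it meaningfully supports.
tool_coverage = {
    "EDR platform": ["Detect", "Respond"],
    "Vulnerability scanner": ["Identify"],
    "Backup service": ["Recover"],
    "Identity provider": ["Protect"],
}

def coverage_gaps(coverage):
    counts = {f: 0 for f in FRAMEWORK_FUNCTIONS}
    for functions in coverage.values():
        for f in functions:
            counts[f] += 1
    return [f for f, n in counts.items() if n == 0]

print("Uncovered functions:", coverage_gaps(tool_coverage))
# -> ['Govern'] for this invented inventory, suggesting where to focus next.
```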

Jon: And I was thinking about the framework because it essentially says there are six areas to address, and maybe a seventh could be important to your organization. But at least having the six areas as a checkbox: Am I dealing with risk response? Am I addressing the right things? It gives you that: not quite a Pareto view, but one of diminishing returns, where you cover the easiest stuff first. Don’t try to fix everything until you’ve fixed the most common issues. That’s what people are trying to do right now.

Paul: Yeah, I think—let me quote another podcast I do, where we do “tech takeaways.” Yeah, who knew? I thought I’d plug it. But if you think about the takeaways from this conversation, I think, you know, going back to your question—what should I be considering as an organization? I think the starting point is probably to take a step back. As a business, as an IT leader inside that business, am I taking a step back to really understand what risk looks like? What does risk look like to the business, and what needs to be prioritized? Then, we need to assess whether we’re capable of measuring our efficacy against that risk. We’re getting lots of metrics and lots of tools. Are those tools effective in helping us avoid the risks we deem important for the business? Once we’ve answered those two questions, we can then look at our posture. Are the tools in place giving us the kind of controls we need to deal with the threats we face? Context is huge.

Jon: On that note, I’m reminded of how organizations like Facebook, for example, had a pretty high tolerance for business risk, especially around customer data. Growth was everything—just growth at all costs. So, they were prepared to manage the risks to achieve that. It ultimately boils down to assessing and taking those risks. At that point, it’s no longer a technical conversation.

Paul: Exactly. It probably never is just a technical conversation. To deliver projects that address risk and security, it should never be purely technical-led. It impacts how the company operates and the daily workflow. If everyone doesn’t buy into why you’re doing it, no security project is going to succeed. You’ll get too much pushback from senior people saying, “You’re just getting in the way. Stop it.” You can’t be the department that just gets in the way. But you do need that culture across the company that security is important. If we don’t prioritize security, all the hard work everyone’s doing could be undone because we haven’t done the basics to ensure there aren’t vulnerabilities waiting to be exploited.

Jon: I’m just thinking about the number of conversations I’ve had with vendors on how to sell security products. You’ve sold it, but then nothing gets deployed because everyone else tries to block it—they didn’t like it. The reality is that the company needs to work towards something and make sure everything aligns to deliver it.

Paul: One thing I’ve noticed over my 30-plus years in this job is how vendors often struggle to explain why they might be valuable to a business. Our COO, Howard Holton, is a big advocate of this argument—that vendors are terrible at telling people what they actually do and where the benefit lies for a business. But one thing he said to me yesterday was about their approach. One representative I know works for a vendor offering an orchestration and automation tool, but when he starts a meeting, the first thing he does is ask why automation hasn’t worked for the customer. Before he pitches his solution, he takes the time to understand where their automation problems are. If more of us did that—vendors and others alike—if we first asked, “What’s not working for you?” maybe we’d get better at finding the things that will work.

Jon: So we have two takeaways for end users – to focus on risk management, and to simplify and refine security metrics. And for vendors, the takeaway is to understand the customer’s challenges before pitching a solution. By listening to the customer’s problems and needs, vendors can provide relevant and effective solutions, rather than simply selling their aspirations. Thanks, Paul!

Making Sense of Cybersecurity – Part 1: Seeing Through Complexity
https://gigaom.com/2025/01/09/making-sense-of-cybersecurity-part-1-seeing-through-complexity/ (January 9, 2025)

At the Black Hat Europe conference in December, I sat down with one of our senior security analysts, Paul Stringfellow. In this first part of our conversation we discuss the complexity of navigating cybersecurity tools, and defining relevant metrics to measure ROI and risk.

Jon: Paul, how does an end-user organization make sense of everything going on? We’re here at Black Hat, and there’s a wealth of different technologies, options, topics, and categories. In our research, there are 30-50 different security topics: posture management, service management, asset management, SIEM, SOAR, EDR, XDR, and so on. However, from an end-user organization perspective, they don’t want to think about 40-50 different things. They want to think about 10, 5, or maybe even 3. Your role is to deploy these technologies. How do they want to think about it, and how do you help them translate the complexity we see here into the simplicity they’re looking for?

Paul: I attend events like this because the challenge is so complex and rapidly evolving. I don’t think you can be a modern CIO or security leader without spending time with your vendors and the broader industry. Not necessarily at Black Hat Europe, but you need to engage with your vendors to do your job.

Going back to your point about 40 or 50 vendors, you’re right. The average number of cybersecurity tools in an organization is between 40 and 60, depending on which research you refer to. So, how do you keep up with that? When I come to events like this, I like to do two things—and I’ve added a third since I started working with GigaOm. One is to meet with vendors, because people have asked me to. Two, go to some presentations. Three is to walk around the Expo floor talking to vendors, particularly ones I’ve never met, to see what they do. 

I sat in a session yesterday, and what caught my attention was the title: “How to identify the cybersecurity metrics that are going to deliver value to you.” That caught my attention from an analyst’s point of view because part of what we do at GigaOm is create metrics to measure the efficacy of a solution in a given topic. But if you’re deploying technology as part of SecOps or IT operations, you’re gathering a lot of metrics to try and make decisions. One of the things they talked about in the session was the issue of creating so many metrics because we have so many tools that there’s so much noise. How do you start to find out the value?

The long answer to your question is that they suggested something I thought was a really smart approach: step back and think as an organization about what metrics matter. What do you need to know as a business? Doing that allows you to reduce the noise and also potentially reduce the number of tools you’re using to deliver those metrics. If you decide a certain metric no longer has value, why keep the tool that provides it? If it doesn’t do anything other than give you that metric, take it out. I thought that was a really interesting approach. It’s almost like, “We’ve done all this stuff. Now, let’s think about what actually still matters.”

This is an evolving space, and how we deal with it must evolve, too. You can’t just assume that because you bought something five years ago, it still has value. You probably have three other tools that do the same thing by now. How we approach the threat has changed, and how we approach security has changed. We need to go back to some of these tools and ask, “Do we really need this anymore?”

Jon: We measure our success with this, and, in turn, we’re going to change.

Paul: Yes, and I think that’s hugely important. I was talking to someone recently about the importance of automation. If we’re going to invest in automation, are we better now than we were 12 months ago after implementing it? We’ve spent money on automation tools, and none of them come for free. We’ve been sold on the idea that these tools will solve our problems. One thing I do in my CTO role, outside of my work with GigaOm, is to take vendors’ dreams and visions and turn them into reality for what customers are asking for.

Vendors have aspirations that their products will change the world for you, but the reality is what the customer needs at the other end. It’s that kind of consolidation and understanding—being able to measure what happened before we implemented something and what happened after. Can we show improvements, and has that investment had real value?

Jon: Ultimately, here’s my hypothesis: Risk is the only measure that matters. You can break that down into reputational risk, business risk, or technical risk. For example, are you going to lose data? Are you going to compromise data and, therefore, damage your business? Or will you expose data and upset your customers, which could hit you like a ton of bricks? But then there’s the other side—are you spending way more money than you need, to mitigate risks? 

So, you get into cost, efficiency, and so on, but is this how organizations are thinking about it? Because that’s my old-school way of viewing it. Maybe it’s moved on.

Paul: I think you’re on the right track. As an industry, we live in a little echo chamber. So when I say “the industry,” I mean the little bit I see, which is just a small part of the whole industry. But within that part, I think we are seeing a shift. In customer conversations, there’s a lot more talk about risk. They’re starting to understand the balance between spending and risk, trying to figure out how much risk they’re comfortable with. You’re never going to eliminate all risk. No matter how many security tools you implement, there’s always the risk of someone doing something stupid that exposes the business to vulnerabilities. And that’s before we even get into AI agents trying to befriend other AI agents to do malicious things—that’s a whole different conversation.

Jon: Like social engineering?

Paul: Yeah, very much so. That’s a different show altogether. But, understanding risk is becoming more common. The people I speak to are starting to realize it’s about risk management. You can’t remove all the security risks, and you can’t deal with every incident. You need to focus on identifying where the real risks lie for your business. For example, one criticism of CVE scores is that people look at a CVE with a 9.8 score and assume it’s a massive risk, but there’s no context around it. They don’t consider whether the CVE has been seen in the wild. If it hasn’t, then what’s the risk of being the first to encounter it? And if the exploit is so complicated that it’s not been seen in the wild, how realistic is it that someone will use it?

It might be so complicated to exploit that nobody ever will, yet it has a 9.8, and it shows up on your vulnerability scanner saying, “You really need to deal with this.” The reality is that no context gets applied to that score—starting with whether the exploit has actually been seen in the wild.
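
(As an illustrative aside: Paul’s point about context can be reduced to a toy calculation in which risk is probability times impact. The weightings below are invented purely to show how the same 9.8 CVE lands very differently once exploitation in the wild and business impact are factored in.)

```python
# Toy illustration only: a raw CVSS score is not a risk score.
# Risk ~ probability x impact, so the same 9.8 CVE scores very differently
# depending on exploitation in the wild and what the affected system does.
# All weights here are invented for illustration.

def contextual_risk(cvss: float, exploited_in_wild: bool, business_impact: float) -> float:
    """business_impact: 0.0 (throwaway test box) to 1.0 (customer-facing revenue system)."""
    probability = (cvss / 10.0) * (1.0 if exploited_in_wild else 0.2)
    return round(probability * business_impact * 10, 1)

# Same CVE, two very different answers on a 0-10 scale:
print(contextual_risk(9.8, exploited_in_wild=False, business_impact=0.1))  # 0.2, maintenance box
print(contextual_risk(9.8, exploited_in_wild=True, business_impact=0.9))   # 8.8, customer-facing site
```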

Jon: Risk equals probability multiplied by impact. So you’re talking about probability and then, is it going to impact your business? Is it affecting a system used for maintenance once every six months, or is it your customer-facing website? But I’m curious because back in the 90s, when we were doing this hands-on, we went through a wave of risk avoidance, then went to, “We’ve got to stop everything,” which is what you’re talking about, through to risk mitigation and prioritizing risks, and so on. 

But with the advancement of the Cloud and the rise of new cultures like agile in the digital world, it feels like we’ve gone back to the direction of, “Well, you need to prevent that from happening, lock all the doors, and implement zero trust.” And now, we’re seeing the wave of, “Maybe we need to think about this a bit smarter.”

Paul: It’s a really good point, and actually, it’s an interesting parallel you raise. Let’s have a little argument while we’re recording this. Do you mind if I argue with you? I’ll question your definition of zero trust for a moment. So, zero trust is often seen as something trying to stop everything. That’s probably not true of zero trust. Zero trust is more of an approach, and technology can help underpin that approach. Anyway, that’s a personal debate with myself. But, zero trust…

Now, I’ll just crop myself in here later and argue with myself. So, zero trust… If you take it as an example, it’s a good one. What we used to do was implicit trust—you’d log on, and I’d accept your username and password, and everything you did after that, inside the secure bubble, would be considered valid with no malicious activity. The problem is, when your account is compromised, logging in might be the only non-malicious thing you’re doing. Once logged in, everything your compromised account tries to do is malicious. If we’re doing implicit trust, we’re not being very smart.

Jon: So, the opposite of that would be blocking access entirely?

Paul: That’s not the reality. We can’t just stop people from logging in. Zero trust allows us to let you log on, but not blindly trust everything. We trust you for now, and we continuously evaluate your actions. If you do something that makes us no longer trust you, we act on that. It’s about continuously assessing whether your activities are appropriate or potentially malicious and then acting accordingly.
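
(As an illustrative aside: the continuous evaluation Paul describes might be sketched as a per-request policy check rather than a one-off login decision. The signals and thresholds below are invented; real zero-trust platforms draw on far richer telemetry.)

```python
# Sketch of "trust for now, keep evaluating", as opposed to implicit trust at
# login. Signals and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    device_compliant: bool      # e.g., disk encryption on, agent healthy
    location_is_usual: bool     # matches the user's normal geography
    risk_score: float           # 0.0 (benign) to 1.0 (clearly malicious)

def evaluate_request(signals: SessionSignals, sensitive_action: bool) -> str:
    """Called on every request, not just at login."""
    if signals.risk_score > 0.8:
        return "revoke_session"
    if sensitive_action and (not signals.device_compliant or not signals.location_is_usual):
        return "step_up_authentication"
    return "allow"

print(evaluate_request(SessionSignals(True, False, 0.3), sensitive_action=True))
# -> "step_up_authentication": still logged in, but no longer blindly trusted.
```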

Jon: It’s going to be a very disappointing argument because I agree with everything you say. You argued with yourself more than I’m going to be able to, but I think, as you said, the castle defense model—once you’re in, you’re in. 

I’m mixing two things there, but the idea is that once you’re inside the castle, you can do whatever you like. That’s changed. 

So, what to do about it? Read Part 2, for how to deliver a cost-effective response. 

Bridging Wireless and 5G
https://gigaom.com/2024/12/18/bridging-wireless-and-5g/ (December 18, 2024)

Wireless connectivity and 5G are transforming the way we live and work, but what does it take to integrate these technologies? I spoke to Bruno Tomas, CTO of the Wireless Broadband Alliance (WBA), to get his insights on convergence, collaboration, and the road ahead.

Q: Bruno, could you start by sharing a bit about your background and your role at the WBA?

Bruno: Absolutely. I’m an engineer by training, with degrees in electrical and computer engineering, as well as a master’s in telecom systems. I started my career with Portugal Telecom and later worked in Brazil, focusing on network standards. About 12 years ago, I joined the WBA, and my role has been centered on building the standards for seamless interoperability and convergence between Wi-Fi, 3G, LTE, and now 5G. At the WBA, we bring together vendors, operators, and integrators to create technical specifications and guidelines that drive innovation and usability in wireless networks.

Q: What are the key challenges in achieving seamless integration between wireless technologies and 5G?

Bruno: One of the biggest challenges is ensuring that our work translates into real-world use cases—particularly in enterprise and public environments. For example, in manufacturing or warehousing, where metal structures and interference can disrupt connectivity, we need robust solutions for starters. At the WBA, we’ve worked with partners from the vendor, chipset and device communities, as well as integrators, to address these challenges by building field-tested guidelines. On top of that comes innovation. For instance, our OpenRoaming concepts help enable seamless transitions between networks, including IoT, reducing the complexity for IT managers and CIOs.

Q: Could you explain how WBA’s “Tiger Teams” contribute to these solutions?

Bruno: Tiger Teams are specialized working groups within our alliance. They bring together technical experts from companies such as AT&T, Intel, Broadcom, and AirTies to solve specific challenges collaboratively. For instance, in our 5G & Wi-Fi convergence group, members define requirements and scenarios for industries like aerospace or healthcare. By doing this, we ensure that our recommendations are practical and field-ready. This collaborative approach helps drive innovation while addressing real-world challenges.

Q: You mentioned OpenRoaming earlier. How does that help businesses and consumers?

Bruno: OpenRoaming simplifies connectivity by allowing users to seamlessly move between Wi-Fi and cellular networks without needing manual logins or configurations. Imagine a hospital where doctors move between different buildings while using tablets for patient care, supported by an enhanced security layer. With OpenRoaming, they can stay connected without interruptions. Similarly, for enterprises, it minimizes the need for extensive IT support and reduces costs while ensuring high-quality service.

Q: What’s the current state of adoption for technologies like 5G and Wi-Fi 6?

Bruno: Adoption is growing rapidly, but it’s uneven across regions. Wi-Fi 6 has been a game-changer, offering better modulation and spectrum management, which makes it ideal for high-density environments like factories or stadiums. On the 5G side, private networks have been announced, especially in industries like manufacturing, but the integration with existing systems remains a hurdle. In Europe, regulatory and infrastructural challenges slow things down, while the U.S. and APAC regions are moving faster.

Q: What role do you see AI playing in wireless and 5G convergence?

Bruno: AI is critical for optimizing network performance and making real-time decisions. At the WBA, we’ve launched initiatives to incorporate AI into wireless networking, helping systems predict and adapt to user needs. For instance, AI can guide network steering—deciding whether a device should stay on Wi-Fi or switch to 5G based on signal quality and usage patterns. This kind of automation will be essential as networks become more complex.
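
(As an illustrative aside: a steering decision of the kind Bruno describes might, at its simplest, look like the sketch below. In practice this would be a trained model fed with live telemetry; the signals and thresholds here are invented for the example.)

```python
# Sketch of a network steering decision: should a device stay on Wi-Fi or
# move to 5G? Thresholds and signal names are invented for illustration.

def steer(wifi_rssi_dbm: float, wifi_loss_pct: float, on_metered_5g: bool,
          latency_sensitive: bool) -> str:
    wifi_healthy = wifi_rssi_dbm > -70 and wifi_loss_pct < 2.0
    if wifi_healthy:
        return "stay_on_wifi"
    if latency_sensitive and not on_metered_5g:
        return "switch_to_5g"
    return "stay_on_wifi_and_monitor"

print(steer(wifi_rssi_dbm=-78, wifi_loss_pct=4.5, on_metered_5g=False, latency_sensitive=True))
# -> "switch_to_5g"
```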

Q: Looking ahead, what excites you most about the future of wireless and 5G?

Bruno: The potential for convergence to enable new use cases is incredibly exciting. Whether it’s smart cities, advanced manufacturing, or immersive experiences with AR and VR, the opportunities are limitless. Wi-Fi 7 will bring even greater capacity and coverage, making it possible to deliver gigabit speeds in dense environments like stadiums or urban centers. Meanwhile, we are starting to look into 6G. One trend is clear: Wi-Fi should be integrated within a 6G framework, enabling densification. At the WBA, we’re committed to ensuring these advancements are accessible, interoperable, and sustainable.

Thank you, Bruno! 

N.B. The WBA Industry Report 2025 has now been released and is available for download. Please click here for further information.

Making FinOps Matter
https://gigaom.com/2024/11/27/making-finops-matter/ (November 27, 2024)

In principle, FinOps – the art and craft of understanding and reducing costs of cloud (and other) services – should be an easy win. Many organizations are aware they are spending too much on cloud-based workloads, they just don’t know how much. So surely it’s a question of just finding out and sorting it, right? I’m not so sure. At the FinOpsX event held in Barcelona last week, a repeated piece of feedback from end-user organizations was how hard it was to get FinOps initiatives going. 

While efforts may be paying off at an infrastructure cost management level, engaging higher up in the organization (or across lines of business) can be a wearying and fruitless task. So, what steps can you take to connect with the people who matter, whose budgets stand to benefit from spending less, or who can reallocate spending to more useful activities? 

Here’s my six-point plan, based on a principle I’ve followed through the years – that innovation means change, which needs change management. Feedback welcome, as well as any examples of success you have seen. 

  1. Map Key Stakeholders

Before you do anything else, consider conducting a stakeholder analysis to identify who will benefit from FinOps efforts. Senior finance stakeholders may care about overall efficiency, but it’s crucial to identify specific people and roles that are directly impacted by cloud spend overruns. For example, some in the organization (such as research areas or testing teams) may be resource-constrained and could always use more capacity, whereas others could benefit from budget reallocation onto other tasks.  Line of business leaders often need new services, but may struggle with budget approvals.

The most impacted individuals can become your strongest advocates in supporting FinOps initiatives, particularly if you help them achieve their goals. So, identify who interacts with cloud spending and IT budgets and who stands to gain from budget reallocation. Once mapped, you’ll have a clear understanding of who to approach with FinOps proposals.

  2. Address Complacency with Data

If you encounter resistance, look for ways to illustrate inefficiencies using hard data. Identifying obvious “money pits”—projects or services that consume funds unnecessarily—can reveal wasteful spending, often due to underutilized resources, lack of oversight, or historical best intentions. These may become apparent without needing to seek approval to look for them first, but can be very welcome revelations when they come. 

For example, instances where machines or services are left running without purpose, burning through budget for no reason, can be reported to the budget holders. Pointing out such costs can emphasize the urgency and need for FinOps practices, providing a solid case for adopting proactive cost-control measures.
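
As a minimal sketch of what such a report might look like, assuming utilization figures have already been exported from your billing or monitoring tooling (the record format and thresholds below are invented for illustration):

```python
# Sketch of a simple "money pit" report: flag resources that have been
# running for a long time with near-zero use. Record format is invented.

from typing import Dict, List

def find_money_pits(resources: List[Dict], cpu_threshold: float = 5.0,
                    min_idle_days: int = 30) -> List[Dict]:
    return [
        r for r in resources
        if r["avg_cpu_pct"] < cpu_threshold and r["days_running"] >= min_idle_days
    ]

inventory = [
    {"name": "test-vm-old-project", "avg_cpu_pct": 1.2, "days_running": 210, "monthly_cost": 430},
    {"name": "prod-api", "avg_cpu_pct": 46.0, "days_running": 365, "monthly_cost": 1900},
]

for r in find_money_pits(inventory):
    print(f"{r['name']}: ~${r['monthly_cost']}/month at {r['avg_cpu_pct']}% average CPU")
```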

  3. Focus Beyond Efficiency to Effectiveness, and More

It’s important to shift FinOps goals from mere cost-saving measures to an effectiveness-driven approach. Efficiency typically emphasizes cutting costs, while effectiveness focuses on improving business-as-usual activity. If you can present a case for how the business stands to gain from FinOps activity (rather than just reducing waste), you can create a compelling case. 

There’s also value in showcasing “greenfield” opportunities, where FinOps practices unlock the potential for growth. Imagine building a funding reserve for innovation, experiments, or new applications and services – this idea can be applied as part of an overall portfolio management approach to technology spend and reward. With FinOps, you can manage resources effectively while building avenues for longer-term success and organizational resilience.

  4. Jump Left, Don’t Just Shift Left

Shifting left and focusing on the design and architecture phases of a project is a worthy goal, but perhaps you shouldn’t wait to be invited. Look for opportunities to participate in early discussions about new applications or workloads, not (initially) to have a direct influence, but to listen and learn about what is coming down the pipe, and to start planning for what FinOps activity needs to cover. 

By identifying cost-control opportunities in advance, you might be able to propose and implement preemptive measures to prevent expenses from spiraling. Even if you can’t make a direct contribution, you can start to get visibility of the project roadmap, allowing you to anticipate what’s coming and stay ahead. Plus, you can build relationships and grow your knowledge of stakeholder needs.

  5. Make the Internal Case for FinOps

Being clear about the value of FinOps is crucial for securing buy-in. Use hard data, like external case studies or specific savings percentages, to illustrate the impact FinOps can have—and present this compellingly. Highlight successful outcomes from similar organizations, together with hard numbers to show that FinOps practices can drive significant cost savings. As with all good marketing, this is a case of “show, don’t tell.”

Develop targeted marketing materials that resonate with the key stakeholders you have mapped, from the executive board down—demonstrating how FinOps benefits not only the organization but also their individual goals. This can create a compelling case for them to become advocates and actively support FinOps efforts.

  6. Become the FinOps Champion

For FinOps to succeed, it needs a dedicated champion. If no one else is stepping up, perhaps it is you! You may not need to take the world on your shoulders, but still consider how you can become a driving force behind FinOps in your organization. 

Start by creating a vision for FinOps adoption. Consider your organization’s level of FinOps maturity, and propose a game plan with achievable steps that can help the business grow and evolve. Then, share with your direct leadership to create measurable goals for yourself and the whole organization. 

Use the principles here, and speak to others in the FinOps Foundation community to understand how to make a difference. At the very least, you will have created a concrete platform for the future, which will have been a great learning experience. And at the other end of the scale, you may already be in a position to drive significant and tangible value for your business. 

GigaOm Research Bulletin #010
https://gigaom.com/2024/11/22/gigaom-research-bulletin-010/ (November 22, 2024)

This bulletin is aimed at our analyst relations connections and vendor subscribers, to update you on the research we are working on, reports we have published, and improvements we have been making. Please do reach out if you have any questions!

CEO Speaks podcast with Ben Book

In our CEO Speaks podcast, our CEO, Ben Book, discusses leadership challenges and the technology market landscape with vendor CEOs. In the latest edition, he speaks to James Winebrenner, CEO of Elisity. As always, please get in touch if you would like to propose your own CEO.

The Good, Bad, and The Techy podcast

In this, more engineering-focused podcast, Howard Holton and Jon Collins sit down with Tyler Reese, Director of Product Management at Netwrix, to discuss the challenges and best practices faced when deploying Identity Security. Do give it a listen, and again, we welcome any suggestions for guests.

Research Highlights

See below for our most recent reports, blogs and articles, and where to meet our analysts in the next few months.

Trending: Enterprise Object Storage is one of our top Radar reads right now. “Unlike traditional block-based storage systems, object storage is optimized for large-scale data repositories, making it ideal for big data, IoT, and cloud-native applications,” say authors Kirk Ryan and Whit Walters.

We are currently taking briefings on: Kubernetes for Edge Computing, Cloud FinOps, Kubernetes Resource Management, Unstructured Data Management, Cloud Networking, Identity & Access Management, Deception Technologies, Enterprise Firewall, Data Lake, and GitOps.

You can keep tabs on the GigaOm research calendar here.

Recent Reports

We’ve released 17 reports since the last bulletin.

In Analytics and AI, we have reports on Data Observability, Semantic Layers and Metric Stores, and Data Catalogs.

For Cloud Infrastructure and Operations, we have Hybrid Cloud Data Protection and AIOps. In Storage, we have covered Cloud-Native Globally Distributed File Systems.

In the Security domain, we have released reports on SaaS Security Posture Management (SSPM), Secure Enterprise Browsing, Data Loss Prevention (DLP), Continuous Vulnerability Management (CVM), Insider Risk Management, Autonomous Security Operations Center (SOC) Solutions, Security Orchestration, Automation and Response (SOAR), and Cloud-Native Application Protection Platforms (CNAPPs).

In Networking, we have covered DDI (DNS, DHCP, and IPAM).

And in Software and Applications, we have a report on E-Discovery and Intelligent Document Processing (IDP).

Blogs and Articles

Our COO, Howard Holton, offers a four-part blog series on “How to CIO”:

Other blogs include:

Meanwhile, Jon talks about Operations Leadership Lessons from the Crowdstrike Incident and DevOps, LLMs and the Software Development Singularity, and asks five questions of Carsten Brinkschulte at Dryad, covering the use of IoT in forest fire prevention.

Quoted in the Press

GigaOm analysts are quoted in a variety of publications. Recently, we were name-checked in the following:

Where To Meet GigaOm Analysts

In the next few months you can expect to see our analysts at AWS re:Invent, Black Hat London and MWC Barcelona. Do let us know if you want to fix a meet!

To send us your news and updates, please add analystconnect@gigaom.com to your lists, and get in touch with any questions. Thanks!

Navigating Technological Sovereignty in the Digital Age
https://gigaom.com/2024/11/22/navigating-technological-sovereignty-in-the-digital-age/ (November 22, 2024)

Depending on who you speak to, technological sovereignty is either a hot topic, or something that other organizations need to deal with. So, should it matter to you and your organization? Let’s first consider what’s driving it, not least the catalyst that is the US Cloud Act, which ostensibly gives the US government access to any data managed by a US provider. This spooked EU authorities and nations, as well as others who saw it as a step too far.

Whilst this accelerated activity across Europe, Africa and other continents, moves were already afoot to preserve a level of sovereignty across three axes: data movement, local control, and what is increasingly seen as the big one – a desire for countries to develop and retain skills and innovate, rather than being passive participants in a cloud-based brain drain. 

This is impacting not just government departments and their contractors, but also suppliers to in-country companies. A couple of years ago, I spoke to a manufacturing materials organization in France that provided goods to companies in Nigeria. “What’s your biggest headache,” I asked the CIO as a conversation starter. “Sovereignty,” he said. “If I can’t show my clients how I will keep data in-country, I can’t supply my goods.”

Legislative themes like the US Cloud Act have made cross-border data management tricky. With different countries enforcing different laws, navigating where and how your data is stored can become a significant challenge. If it matters to you, it really matters. In principle, technological sovereignty solves this, but there’s no single, clear definition. It’s a concept that’s easy to understand at a high level, but tricky to pin down.

Technological sovereignty is all about ensuring you have control over your digital assets—your data, infrastructure, and the systems that run your business. But it’s not just about knowing where your data is stored. It’s about making sure that data is handled in a way that aligns with the country’s regulations and your business strategy and values.

For organizations in Europe, the rules and regs are quite specific. The upcoming EU Data Act focuses on data sharing and access across different sectors, whilst the AI Act introduces rules around artificial intelligence systems. Together, these evolving regulations are pushing organizations to rethink their technology architectures and data management strategies.

As ever, this means changing the wheels on a moving train. Hybrid and multi-cloud environments and complex data architectures add layers of complexity, whilst artificial intelligence is transforming how we interact with and manage data. AI is both a sovereignty blessing and a curse: it can enable data to be handled more effectively, but as AI models become more sophisticated, organizations need to be even more careful about how they process data from a compliance perspective.

So, where does this leave organizations that want the flexibility of cloud services but need to maintain control over their data? Organizations have several options:

  • Sovereign Hyper-Scalers: Over the next year, cloud giants like AWS and Azure will be rolling out sovereign cloud offerings tailored to the needs of organizations that require stricter data controls. 
  • Localized Providers: Working with local managed service providers (MSPs) can give organizations more control within their own country or region, helping them keep data close to home.
  • On-premise Solutions: This is the go-to option if you want full control. However, on-premise solutions can be costly and come with their own set of complexities. It’s about balancing control with practicality.

The likelihood is that a combination of all three will be required, at least in the short to medium term. Inertia will play its part: given that it’s already a challenge to move existing workloads beyond the lower-hanging fruit into the cloud, sovereignty creates yet another series of reasons to leave them where they are, for better or worse.

There’s a way forward for sovereignty as both a goal and a burden, centered on the word governance. Good governance is about setting clear policies for how your data and systems are managed, who has access, and how you stay compliant with regulations for both your organization and your customers. This is a business-wide responsibility: every level of your organization should be aligned on what sovereignty means for your company and how you will enforce it. 

This may sound onerous to the point of impossibility, but that is the nature of governance, risk and compliance (GRC) – the trick is to assess, prioritize and plan, building sovereignty criteria into the way the business is designed. Want to do business in certain jurisdictions? If so, you need to bake their requirements into your business policies, which can then be rolled out into your application, data and operational policies.
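
As a minimal sketch of what “baking requirements into policy” can mean in practice, assuming an invented rule format and region names: residency rules are declared once, and application and data teams check proposed deployments against them.

```python
# Sketch of a declared residency policy that deployment tooling can check
# against. Rules, data classes, and region names are illustrative only.

RESIDENCY_RULES = {
    "customer_pii:FR": {"allowed_regions": ["eu-west-france", "eu-central"]},
    "customer_pii:NG": {"allowed_regions": ["af-south"]},
}

def placement_allowed(data_class: str, customer_country: str, target_region: str) -> bool:
    rule = RESIDENCY_RULES.get(f"{data_class}:{customer_country}")
    if rule is None:
        return False  # default-deny: no declared rule, no deployment
    return target_region in rule["allowed_regions"]

print(placement_allowed("customer_pii", "NG", "eu-central"))  # -> False
```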

Get this the other way around, and it will always be harder than necessary. However, done right, technological sovereignty can also offer a competitive advantage. Organizations with a handle on their data and systems can offer their customers more security and transparency, building trust. By embedding sovereignty into your digital strategy, you’re not just protecting your organization—you’re positioning yourself as a leader in responsible business, and building a stronger foundation for growth and innovation. 

Technological sovereignty should be a strategic priority for any organization that wants to stay ahead in today’s complex digital landscape. It’s not just about choosing the right cloud provider or investing in the latest security tools—it’s about building a long-term, business-driven strategy that ensures you stay in control of your data, wherever in the world it is.

The future of sovereignty is about balance. Balancing cloud and on-premise solutions, innovation and control, and security with flexibility. If you can get that balance right, you’ll be in a strong position to navigate whatever the digital world throws at you next.


DevOps, LLMs, and the Software Development Singularity
https://gigaom.com/2024/11/07/devops-llms-and-the-software-development-singularity/ (November 7, 2024)

A Brief History of DevOps

To understand the future of DevOps, it’s worth understanding its past—which I can recall with a level of experience. In the late ’90s, I was a DSDM (Dynamic Systems Development Method) trainer. DSDM was a precursor to agile, a response to the slow, rigid structures of waterfall methodologies. With waterfall, the process was painstakingly slow: requirements took months, design took weeks, coding seemed endless, and then came testing, validation, and user acceptance—all highly formalized. 

While such structure was seen as necessary to avoid mistakes, by the time development was halfway done, the world had often moved on, and requirements had changed. I remember when we’d built bespoke systems, only for a new product to launch with graphics libraries that made our custom work obsolete. A graphics tool called “Ilog,” for instance, was bought by IBM and replaced an entire development need. This exemplified the need for a faster, more adaptive approach.

New methodologies emerged to break the slow pace. In the early ’90s, rapid application development and the spiral methodology—where you’d build and refine repeated prototypes—became popular. These approaches eventually led to methodologies like DSDM, built around principles like time-boxing and cross-functional teams, with an unspoken “principle” of camaraderie—hard work balanced with hard play.

Others were developing similar approaches in different organizations, such as the Select Perspective developed by my old company, Select Software Tools (notable for its use of the Unified Modelling Language and integration of business process modelling). All of these efforts paved the way for concepts that eventually inspired Gene Kim et al’s The Phoenix Project, which paid homage to Eli Goldratt’s The Goal. It tackled efficiency and the need to keep pace with customer needs before they evolved past the original specifications.

In parallel, object-oriented languages were added to the mix, helping by building applications around entities that stayed relatively stable even if requirements shifted (hat tip to James Rumbaugh). So, in an insurance application, you’d have objects like policies, claims, and customers. Even as features evolved, the core structure of the application stayed intact, speeding things up without needing to rebuild from scratch.

Meanwhile, along came Kent Beck and extreme programming (XP), shifting focus squarely to the programmer, placing developers at the heart of development. XP promoted anti-methodologies, urging developers to throw out burdensome, restrictive approaches and instead focus on user-driven design, collaborative programming, and quick iterations. This fast-and-loose style had a maverick, frontier spirit to it. I remember meeting Kent for lunch once—great guy.

The term “DevOps” entered the software world in the mid-2000s, just as new ideas like service-oriented architectures (SOA) were taking shape. Development had evolved from object-oriented to component-based, then to SOA, which aligned with the growing dominance of the internet and the rise of web services. Accessing parts of applications via web protocols brought about RESTful architectures.

The irony is that as agile matured further, formality snuck back in with methodologies like the Scaled Agile Framework (SAFe) formalizing agile processes. The goal remained to build quickly but within structured, governed processes, a balancing act between speed and stability that has defined much of software’s recent history.

The Transformative Effect of Cloud

Then, of course, came the cloud, which transformed everything again. Computers, at their core, are entirely virtual environments. They’re built on semiconductors, dealing in zeros and ones—transistors that can be on or off, creating logic gates that, with the addition of a clock, allow for logic-driven processing. From basic input-output systems (BIOS) all the way up to user interfaces, everything in computing is essentially imagined.

It’s all a simulation of reality, giving us something to click on—like a mobile phone, for instance. These aren’t real buttons, just images on a screen. When we press them, it sends a signal, and the phone’s computer, through layers of silicon and transistors, interprets it. Everything we see and interact with is virtual, and it has been for a long time.

Back in the late ’90s and early 2000s, general-use computers advanced from running a single workload on each machine to managing multiple “workloads” at once. Mainframes could do this decades earlier—you could allocate a slice of the system’s architecture, create a “virtual machine” on that slice, and install an operating system to run as if it were a standalone computer. 

Meanwhile, other types of computers also emerged—like the minicomputers from manufacturers such as Tandem and Sperry Univac. Most have since faded away or been absorbed by companies like IBM (which still operates mainframes today). Fast forward about 25 years, and we saw Intel-based or x86 architectures first become the “industry standard” and then develop to the point where affordable machines could handle similarly virtualized setups.

This advancement sparked the rise of companies like VMware, which provided a way to manage multiple virtual machines on a single hardware setup. It created a layer between the virtual machine and the physical hardware—though, of course, everything above the transistor level is still virtual. Suddenly, we could run two, four, eight, 16, or more virtual machines on a single server.

The virtual machine model eventually laid the groundwork for the cloud. With cloud computing, providers could easily spin up virtual machines to meet others’ needs in robust, built-for-purpose data centers. 

However, there was a downside: applications now had to run on top of a full operating system and hypervisor layer for each virtual machine, which added significant overhead. Having five virtual machines meant running five operating systems—essentially a waste of processing power.

The Rise of Microservices Architectures

Then, around the mid-2010s, containers emerged. Docker, in particular, introduced a way to run application components within lightweight containers, communicating with each other through networking protocols. Containers added efficiency and flexibility. Docker’s “Docker Swarm” and, later, Google’s Kubernetes helped orchestrate and distribute these containerized applications, making deployment easier and leading to today’s microservices architectures. Virtual machines still play a role today, but container-based architectures have become more prominent. A quick nod, too, to other models such as serverless, in which you can execute code at scale without worrying about the underlying infrastructure—it’s like a giant interpreter in the cloud.

All such innovations gave rise to terms like “cloud-native,” referring to applications built specifically for the cloud. These are often microservices-based, using containers and developed with fast, agile methods. But despite these advancements, older systems still exist: mainframe applications, monolithic systems running directly on hardware, and virtualized environments. Not every use case is suited to agile methodologies; certain systems, like medical devices, require careful, precise development, not quick fixes. Google’s term, “continuous beta,” would be the last thing you’d want in a critical health system.

And meanwhile, we aren’t necessarily that good at the constant dynamism of agile methodologies. Constant change can be exhausting, like a “supermarket sweep” every day, and shifting priorities repeatedly is hard for people. That’s where I talk about the “guru’s dilemma.” Agile experts can guide an organization, but sustaining it is tough. This is where DevOps often falls short in practice. Many organizations adopt it partially or poorly, leaving the same old problems unsolved, with operations still feeling the brunt of last-minute development hand-offs. Ask any tester. 

The Software Development Singularity

And that brings us to today, where things get interesting with AI entering the scene. I’m not talking about the total AI takeover, the “singularity” described by Ray Kurzweil and his peers, where we’re just talking to super-intelligent entities. Two decades ago, that was 20 years away, and that’s still the case. I’m talking about the practical use of large language models (LLMs). Application creation is rooted in languages, from natural language used to define requirements and user stories, through the structured language of code, to “everything else” from test scripts to bills of materials; LLMs are a natural fit for software development. 

Last week, however, at GitHub Universe in San Francisco, I saw what’s likely the dawn of a “software development singularity”—where, with tools like GitHub Spark, we can type a prompt for a specific application, and it gets built. Currently, GitHub Spark is at an early stage – it can create simpler applications with straightforward prompts. But this will change quickly. First, it will evolve to build more complex applications with better prompts. Many applications have common needs—user login, CRUD operations (Create, Read, Update, Delete), and workflow management. While specific functions may differ, applications often follow predictable patterns. So, the catalog of applications that can be AI-generated will grow, as will their stability and reliability.

That’s the big bang news: it’s clear we’re at a pivotal point in how we view software development. As we know, however, there’s more to developing software than writing code. LLMs are being applied in support of activities across the development lifecycle, from requirements gathering to software delivery:

  • On the requirements front, LLMs can help generate user stories and identify key application needs, sparking conversations with end-users or stakeholders. Even if high-level application goals are the same, each organization has unique priorities, so AI helps tailor these requirements efficiently. This means fewer revisions, whilst supporting a more collaborative development approach.
  • AI also enables teams to move seamlessly from requirements to prototypes. With tools such as GitHub Spark, developers can easily create wireframes or initial versions, getting feedback sooner and helping ensure the final product aligns with user needs. 
  • LLMs also support testing and code analysis—a labor-intensive and burdensome part of software development. For instance, AI can suggest comprehensive test coverage, create test environments, handle much of the test creation, generate relevant test data, and even help decide when testing is sufficient, reducing the costs of test execution (see the sketch after this list). 
  • LLMs and machine learning have also started supporting fault analysis and security analytics, helping developers code more securely by design. AI can recommend architectures, models and libraries that offer lower risk, or fit with compliance requirements from the outset.
  • LLMs are reshaping how we approach software documentation, which is often a time-consuming and dull part of the process. By generating accurate documentation from a codebase, LLMs can reduce the manual burden whilst ensuring that information is up-to-date and accessible. They can summarize what the code does, highlighting unclear areas that might need a closer look.
  • One of AI’s most transformative impacts lies in its ability to understand, document, and migrate code. LLMs can analyze codebases, from COBOL on mainframes to database stored procedures, helping organizations understand what’s vital, versus what’s outdated or redundant. In line with Alan Turing’s foundational principles, AI can convert code from one language to another by interpreting rules and logic.
  • For project leaders, AI-based tools can analyze developer activity and provide readable recommendations and insights to increase productivity across the team. 
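As a concrete illustration of the testing point above, here is a minimal sketch of how LLM-assisted test generation might be wired into a workflow. The call_llm() helper is hypothetical, standing in for whichever model API you use; the rest is ordinary Python and pytest, and a human still reviews what comes back.

```python
# Minimal sketch of LLM-assisted unit test generation.
# call_llm() is a hypothetical placeholder for whichever model API you use;
# it is an assumption for illustration, not a specific vendor's SDK.
import subprocess
import textwrap


def call_llm(prompt: str) -> str:
    """Send the prompt to your chosen LLM and return its reply (to be wired up)."""
    raise NotImplementedError("Connect this to your model provider.")


def generate_tests(source_code: str, module_name: str) -> str:
    """Ask the model for pytest-style tests covering the given module."""
    prompt = textwrap.dedent(f"""
        Write pytest unit tests for the Python module '{module_name}' below.
        Cover normal cases, edge cases, and invalid input. Return only code.

        {source_code}
    """)
    return call_llm(prompt)


def write_and_run_tests(test_code: str, path: str = "test_generated.py") -> int:
    """Persist the generated tests and run them; failures still need human review."""
    with open(path, "w") as handle:
        handle.write(test_code)
    return subprocess.call(["pytest", path, "-q"])
```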

AI is becoming more than a helper—it’s enabling faster, more iterative development cycles. With LLMs able to shoulder many responsibilities, development teams can allocate resources more effectively, moving from monotonous tasks to more strategic areas of development.

AI as a Development Accelerator

As this (incomplete) list suggests, there's still plenty to be done beyond code creation, with activities across the lifecycle supported and augmented by LLMs. These tools can automate repetitive tasks and enable efficiency in ways we haven't seen before. However, complexities in software architecture, integration, and compliance still require human oversight and problem-solving.

Not least, AI-generated code and recommendations aren't without limitations. For example, while experimenting with LLM-generated code, I found ChatGPT recommending a library with function calls that didn't exist. At least it apologized when I pointed out the hallucination! Of course, this will improve, but human expertise will be essential to ensure outputs align with intended functionality and quality standards.
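One practical safeguard, assuming you are working in Python, is to check that any module or function an LLM recommends actually exists before building on it. The snippet below is a minimal sketch of that check using the standard importlib module; it is illustrative, not a complete defense against hallucinated APIs.

```python
# A small guard against hallucinated APIs: before trusting an LLM suggestion,
# verify that the module imports and actually exposes the named attribute.
import importlib


def api_exists(module_name: str, attribute: str) -> bool:
    """Return True if module_name can be imported and exposes attribute."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attribute)


# json.dumps is real; json.to_yaml is the kind of call a model might invent.
print(api_exists("json", "dumps"))    # True
print(api_exists("json", "to_yaml"))  # False
```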

Other challenges stem from the very ease of creation. Each piece of new code will require configuration management, security management, quality management and so on. Just as with virtual machines before, we have a very real risk of auto-created application sprawl. The biggest obstacles in development—integrating complex systems, or minimizing scope creep—are challenges that AI is not yet fully equipped to solve.

Nonetheless, the gamut of LLM-based tooling stands to augment how development teams and their ultimate customers, the end-users, interact. This raises the question, "Whither DevOps?", keeping in mind that agile methodologies emerged because their waterfall-based forebears were too slow to keep up. I believe such methodologies will evolve, augmented by AI-driven tools that guide workflows without needing extensive project management overhead. 

This shift enables quicker, more structured delivery of user-aligned products, maintaining secure and compliant standards without compromising speed or quality. We can expect a return to waterfall-based approaches, albeit where the entire cycle takes a matter of weeks or even days. 

In this new landscape, developers evolve from purist coders to facilitators, orchestrating activities from concept to delivery. Within this, AI might speed up processes and reduce risks, but developers will still face many engineering challenges—governance, system integration, and maintenance of legacy systems, to name a few. Technical expertise will remain essential for bridging gaps AI cannot yet cover, such as interfacing with legacy code, or handling nuanced, highly specialized scenarios.

LLMs are far from replacing developers. In fact, given the growing skills shortage in development, they are quickly becoming a necessary tool, enabling more junior staff to tackle more complex problems with reduced risk. In this changing world, building an application is the one thing keeping us from building the next one. LLMs create an opportunity to accelerate not just pipeline activity, but entire software lifecycles. We might, and in my opinion should, see a shift from pull requests to story points as a measure of success. 

The Net-Net for Developers and Organizations

For development teams, the best way to prepare is to start using LLMs—experiment, build sample applications, and explore beyond the immediate scope of coding. Software development is about more than writing loops; it’s about problem-solving, architecting solutions, and understanding user needs. 

Ultimately, by focusing on what matters, developers can rapidly iterate on version updates or build new solutions to tackle the endless demand for software. So, if you’re a developer, embrace LLMs with a broad perspective. LLMs can free you from the drudge, but the short-term challenge will be more about how to integrate them into your workflows. 

Or, you can stay old school and stick with a world of hard coding and command lines. There will be a place for that for a few years yet. Just don't think you are doing yourself or your organization any favors: application creation has always been about using software-based tools to get things done, and LLMs are no exception. 

Rest assured, we will always need engineers and problem solvers, even if the problems change. LLMs will continue to evolve; my money is on multiple LLM-based agents being put in sequence to check each other's work, test the outputs, or create contention by offering alternative approaches to a scenario.

The future of software development promises to be faster-paced, more collaborative, and more innovative than ever. It will be fascinating, and our organizations will need help making the most of it all.

The post DevOps, LLMs, and the Software Development Singularity appeared first on Gigaom.

5 Questions for Carsten Brinkschulte, CEO Dryad: Silvanet, early warning for forest fires https://gigaom.com/2024/09/06/5-questions-carsten-brinkschulte-dryad/ Fri, 06 Sep 2024 15:21:03 +0000

I spoke recently with Carsten Brinkschulte, co-founder and CEO of Dryad. Here is some of our conversation on Silvanet and how it deals with the ever-growing global concern of forest fires.

Carsten, tell me a bit about yourself, Dryad, and your product, Silvanet.

I’ve been in telecoms for 25 years. I’ve had three startups and three exits in the space, in 4G network infrastructure, mobile email, instant messaging services, and device management. I started Dryad in 2020 with five co-founders. Dryad is what you’d call an “impact for profit” company. The mission is to be green, not just as a PR exercise. We want a positive environmental impact, but also a profit—then we can have more impact.

We introduced Silvanet in 2023 to focus on the ultra-early detection of wildfires because they have such a devastating environmental impact, particularly on global warming. Between six and eight billion tons of CO2 are emitted in wildfires across the world each year, which is 20% of global CO2 emissions.

Our mission is to reduce human-induced wildfires. Arson, reckless behavior, accidents, and technical faults account for 80% of fires. We want to prevent biodiversity loss and CO2 emissions, but also address economic loss because fires cause huge amounts of damage. The low end of the figures is about $150 billion, but that figure can go up to $800 billion a year, depending on how you look at the statistics.

What is your solution?

Silvanet is an end-to-end solution—sensors, network infrastructure, and a cloud platform. We've developed a solar-powered gas sensor that we embed in the forest: you can hang it on a tree. It is like an electronic nose that can smell the fire. You don't have to have an open flame: someone can throw a cigarette and, depending on wind and other parameters, a nearby sensor should be able to detect it within 30-60 minutes.

We’re running embedded AI on the edge in the sensor, to distinguish between the smells that the sensor is exposed to. When the sensor detects a fire, it will send an alert.

Sensors are solar-powered. The solar panels are quite small but big enough to power the electronics via a supercapacitor for energy storage. A supercapacitor doesn't have as much energy density as a battery, but it doesn't have the downsides either. Lithium-ion would be a silly idea because it can self-ignite. We didn't want to bring a fire starter to the forest.

Obviously, you don’t get much direct sunlight under the trees, but the supercapacitors work well in low temperatures and have no limitations with regards to recharge cycles. The whole setup is highly efficient. We take care to not use excess energy.

Next, since we are in the middle of a forest, we typically don't have 4G or other connectivity, so Silvanet works as an IoT mesh network. We're using LoRaWAN for the communications, which is like Wi-Fi but lower power and longer range—it can communicate over a few kilometers. We've added the mesh topology because LoRaWAN doesn't have mesh. Nobody else has done this as far as we are aware.

The mesh enables us to cover large areas without any power nearby! Sensors communicate from deep in the forest, over the mesh to a border gateway. Then a cloud platform captures the data, analyzes it further, and sends out alerts to firefighters.

What does deployment look like?

Deployment density depends on the customer. You typically have irregular deployments where you focus on high-risk, high-value areas. In remote locations, we put fewer sensors, but in areas like highways, walking paths, power lines, and train lines, where most fires start, we put many more.

Humans don’t start fires in the middle of the forest. They’ll be along hiking paths where people throw a cigarette, or a campfire grows out of control or is not properly extinguished. For the rest, you could have a lightning-induced fire, or a power line where a tree falls onto it, or a train sparks, causing a grass fire that turns into a bush fire and then a wildfire.

You end up with variable density. You need one sensor per hectare (roughly two and a half acres) for a fast detection time, then one sensor per five hectares overall.

Other solutions include optical satellite systems, which look down from space to detect fires with infrared cameras, or cameras on the ground that can see smoke plumes rising above the trees. All these systems make sense. Satellites are invaluable for seeing where big fires are heading, but they’re late in the game when it comes to detection. Cameras are good as well because they are closer to the action.

The fastest is arguably the electronic sensors, but they can't be everywhere. So, ideally, you would deploy all three systems. Cameras have a greater overview, and satellites have the biggest picture. You can focus sensor systems on areas of high risk and high value—like the wildland-urban interface, where people both cause fires and are affected by them.

Do you have an example?

We have a pilot deployment in Lebanon. The deployment was high density because it's what's called a wildland-urban interface—there are people living in villages, some farming activity, and forests. It's of the highest risk and highest value because if there is a fire, there's a good chance that it spreads and becomes a conflagration—then you have a catastrophe.

Within the pilot, we detected a small fire within about 30 minutes. Initially, the AI in the sensor calculated, from the gas scans, a 30% probability of it being a fire. The wind may have changed, as the probability went down; then, about 30 minutes later, it sensed more smoke and "decided" it was really a fire.

How’s business looking?

We try to keep pricing as low as possible—despite being manufactured in Germany, the sensors cost less than €100 each. We have a service fee for operating the cloud, charged on an annual basis, but that's also low cost.

Last year, we sold 20,000 sensors worldwide. We now have 50 installations: in southern Europe (Greece, Spain, and Portugal), in the US in California, in Canada, in Chile, and as far away as South Korea. We have a deployment in the UK with the National Trust. We also have three or four forests in Germany, in Brandenburg, which is very fire-prone and dry as a tinderbox.

This year, we’re expecting more than 100,000 sensors to be shipped. We’re ramping up manufacturing to allow for that volume. We’re properly funded with venture capital—we just raised another 5.6 million in the middle of March to fuel the growth we’re seeing.

The vision is to go beyond fire: once a network is installed in the forest, you can do much more. We're starting to work on additional sensors, like a fuel moisture sensor that gauges fire risk by measuring the moisture of the fuel on the ground, a dendrometer that measures tree growth, and a chainsaw detection device to detect illegal logging.

The post 5 Questions for Carsten Brinkschulte, CEO Dryad: Silvanet, early warning for forest fires appeared first on Gigaom.

GigaOm Survey Report: Delivering Application Performance in a Hybrid World https://gigaom.com/report/gigaom-survey-report-delivering-application-performance-in-a-hybrid-world/ Wed, 28 Aug 2024 19:38:56 +0000

This GigaOm survey, of 352 senior and technical decision makers across North America and Western Europe, assessed architectures, challenges, and approaches to building and managing performant applications. The survey was commissioned by SolarWinds, following a similar survey conducted two years ago. This has enabled comparisons to be drawn between evolving behaviors, challenges, and responses.

Key findings are:

  • There is an imbalance between strategy and reality for cloud-based versus hybrid approaches. Whereas only 43% of organizations favor a hybrid strategy for their cloud applications, 56% have a hybrid application architecture. 70% of respondents saw customer experience as a primary driver for cloud-first. Only 50% of respondents saw the lower cost of delivery as a primary driver, suggesting a move beyond saving money as a primary criterion.
  • Application complexity is the biggest operational challenge organizations face, according to 51% of the overall sample. This is driving organizations that would prefer a cloud-based approach towards unplanned hybrid models.
  • Looking at operational management and observability, real-time performance measurement is the highest-priority operational capability for 64% of respondents. We can also see large language models (LLMs) and artificial intelligence (AI) beginning to play a role in operational management.
  • Drilling into features, existing tooling is making a difference: identifying performance improvements is the number one benefit for 64%. Most in need of improvement are higher-order features such as traces and business/retail metrics.
  • For organizations struggling with their cloud-first aspirations, lessons can be drawn from more advanced organizations regarding DevOps adoption and success in moving to cloud-based models.
  • We found 60% of organizations with limited DevOps experience face complexity challenges to operations, compared with 46% that are optimizing their DevOps use. Similarly, 51% of those with limited DevOps experience struggle to build a picture of performance, compared with just 41% of the more advanced group. This can be associated with skills investment. 46% of the limited DevOps group say they lack operational skills, compared with 30% of the optimizing group.
  • Similarly, 81% of the cloud-native group prioritize a real-time view of performance, compared to 60% working in legacy/virtualized environments. Meanwhile, 60% of cloud-first and 58% of cloud-native respondents favor a complete picture of performance across apps and infrastructure, compared with 48% of hybrid and 44% of legacy/virtualized groups.
  • Some 65% of cloud-native respondents considered linking application performance to business outcomes important. Cloud-native organizations prioritize the business, a lesson all organizations should learn.

From the research overall, we see how performance management tools are being prioritized to address the complexity challenge and deliver on their observability goals. More advanced organizations prioritize an integrated, holistic view of application performance, drawing on measures from the top to the bottom of the stack.

To avoid creating unnecessary complexity by getting stuck in a halfway-hybrid house, we recommend preparing in advance. This means building skills around cloud-based and DevOps approaches, so that both become viable destinations, rather than being trapped in an unplanned hybrid state.

The post GigaOm Survey Report: Delivering Application Performance in a Hybrid World appeared first on Gigaom.
