Verda positions itself as an AI cloud infrastructure company built for teams that need GPU capacity for training and inference without long procurement cycles. The company, headquartered in Helsinki and previously known as Datacrunch, argues that the way compute gets bought is changing. In its view, cloud capacity is moving away from bespoke enterprise negotiation and toward instant, self-service access that developers, organisations and, increasingly, AI-driven workflows can consume on demand.

That pitch landed with investors. Verda announced a funding round of $117 million, described elsewhere as €100 million in combined debt and equity. Lifeline Ventures led the equity portion, with participation from byFounders, Tesi, Varma and other investors. A group of Nordic financial institutions provided the debt financing. The company said it will use the capital to accelerate product development and expand into new markets across Europe, the US and Asia.

The timing matters. Demand for sovereign, or at least non-US, cloud options has risen in Europe, especially as AI workloads intensify the need for high-performance compute. Verda sits directly in that gap. It does not just sell servers or colocation. It wants to operate as a vertically integrated AI cloud provider with control from physical infrastructure up to the application layer. That ambition puts it in a category far larger than a typical GPU rental startup and much closer, at least in narrative, to a regional hyperscaler built for AI-first workloads.

What Verda is

Verda is an AI cloud platform for developers and organisations that need access to compute for model training and inference. The company says it offers self-service instances and clusters, serverless containers and managed inference endpoints. The core promise is straightforward. Users should be able to get GPU capacity quickly, with transparent access and without the friction that often surrounds enterprise cloud procurement.

That positioning responds to a practical bottleneck in the AI market. High-end compute remains scarce, expensive and unevenly distributed. Access often depends on long sales cycles, capacity reservations and account management processes that favour large buyers. Verda frames this as a structural mismatch between how AI teams work and how cloud infrastructure is still sold. It wants to replace negotiation-heavy allocation with a platform model where compute behaves more like an on-demand utility.

The company also ties its product strategy to a broader shift in software buying. Founder and CEO Ruben Bryon argues that the buyer on the other side may no longer be only a person. Parts of procurement, provisioning and optimisation may increasingly run through automated flows and AI agents. If that happens, cloud platforms that still depend on manual sales interaction and opaque pricing will face pressure. Verda is building around that assumption.

Who is behind Verda

Verda is led by Ruben Bryon, its founder and CEO. The company is headquartered in Helsinki. It rebranded from Datacrunch to Verda as it moved beyond its image as a narrow compute provider toward a broader AI cloud infrastructure company.

Its investor base combines venture capital, institutional capital and debt. Lifeline Ventures led the announced funding round. byFounders, Tesi and Varma also participated, alongside unnamed additional investors. Nordic financial institutions supplied the debt component. That structure is notable. Infrastructure heavy AI businesses often need both equity for growth and debt for asset intensive expansion. Verda appears to be using that blended model as it scales capacity, enters new markets and develops larger sites.

The company has also reached a scale that gives some operational context to its claims. It says it employs more than 100 people and plans to hire another 100-plus by the end of the year. Those roles will focus on engineering, its AI Lab and go-to-market functions. Bryon has highlighted hardware and AI research as priority areas. That hiring plan suggests Verda sees product differentiation not only in infrastructure deployment, but also in tighter customer-facing technical support and research-driven product development.

What Verda does in practice

Verda runs an AI cloud stack that spans physical infrastructure and customer-facing services. According to the available information, it already operates data centres in Finland and Iceland. It currently leases data centre space while also building its own facility in Finland and preparing construction of larger sites in Sweden. That matters because capacity strategy is central to whether an AI cloud provider can compete on performance, availability and economics.

The company describes itself as vertically integrated. In practical terms, that means it does not just resell third party infrastructure. It manages more of the stack, from the underlying hardware environment to the services customers actually use. For AI buyers, vertical integration can improve control over provisioning, pricing and workload optimisation. It can also create tighter feedback loops between infrastructure operations and product design.

Verda says its cloud runs on 100 percent renewable energy. That point serves two functions. First, it helps position the company in regions where power mix and energy costs shape data centre strategy. Second, it supports procurement narratives for customers under pressure to account for the environmental profile of AI workloads. The claim does not by itself distinguish Verda from every competitor, but in infrastructure markets, power source and site location are now part of the commercial argument.

The company also points to ecosystem credibility markers. It says it is one of a select group of NVIDIA Preferred Partners globally. It also names customers such as 1X, Nokia, ExpressVPN and Freepik. Those names span robotics, telecom, privacy technology and creative tooling. That mix suggests Verda is not targeting a single vertical. It wants to serve a broad class of AI intensive users that need flexible compute access rather than deep lock in to one public cloud environment.

Why Verda emphasises frictionless GPU access

The strongest through line in Verda’s messaging is not just European identity. It is frictionless access to compute. Bryon has framed the original problem in simple terms: teams should not have to talk to someone just to get machine learning compute. That sounds obvious, but much of the high-end GPU market still works that way, especially when buyers want clustered infrastructure, large reservations or custom terms.

Verda’s answer is to make AI compute purchasable through a self-service model. The company connects that model to two trends. One is the rise of inference-heavy workloads that need flexible and scalable deployment patterns. The other is the emergence of agentic purchasing and automated operational flows. If infrastructure demand increasingly comes from systems that trigger capacity based on usage, latency or cost thresholds, then procurement friction becomes a technical problem, not just a sales problem.
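The agentic-purchasing idea can be made concrete. The sketch below is purely illustrative and does not use Verda's actual API; every name and threshold in it (`ClusterMetrics`, `autoscale_step`, the utilisation and budget ceilings) is invented for the example. It shows procurement as a control loop: an agent reads telemetry and decides, with no human in the loop, whether to request or release capacity.

```python
from dataclasses import dataclass

# Hypothetical thresholds, chosen for illustration only -- not Verda parameters.
UTILIZATION_CEILING = 0.85   # scale up when fleet GPU utilisation exceeds this
LATENCY_CEILING_MS = 250.0   # or when p95 inference latency exceeds this
HOURLY_BUDGET_USD = 40.0     # never provision past this spend rate

@dataclass
class ClusterMetrics:
    gpu_utilization: float   # 0.0 to 1.0 across the fleet
    p95_latency_ms: float
    hourly_spend_usd: float

def should_scale_up(m: ClusterMetrics) -> bool:
    """Decide whether an agent should request more capacity.

    Procurement becomes a pure function of telemetry: if the workload is
    saturated or slow, and budget headroom remains, acquire more compute.
    """
    under_budget = m.hourly_spend_usd < HOURLY_BUDGET_USD
    saturated = m.gpu_utilization > UTILIZATION_CEILING
    slow = m.p95_latency_ms > LATENCY_CEILING_MS
    return under_budget and (saturated or slow)

def autoscale_step(m: ClusterMetrics, provision, release) -> str:
    """One control-loop tick: provision, release, or hold capacity."""
    if should_scale_up(m):
        provision()   # e.g. a self-service provisioning call, no sales contact
        return "scale-up"
    if m.gpu_utilization < 0.2 and m.p95_latency_ms < LATENCY_CEILING_MS / 2:
        release()     # hand capacity back when it is clearly idle
        return "scale-down"
    return "hold"
```

The design point is that the friction the article describes shows up here as code: if acquiring capacity requires a negotiation rather than an API call, `provision()` cannot exist, and the loop breaks.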

This is also where Verda’s infrastructure strategy intersects with product design. Self-service instances address developers who need direct access. Clusters target larger training jobs. Serverless containers fit bursty or application-oriented execution. Managed inference endpoints aim at teams that want to operationalise models without assembling the serving layer themselves. Together, these pieces suggest Verda is trying to cover the path from experimentation to deployment inside one cloud environment.
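The division of labour among those four surfaces can be sketched as a simple routing rule. This is a hypothetical illustration of the architecture described above, not Verda's product logic; the function name, stage labels and the 8-GPU cutoff are all assumptions made for the example.

```python
# Illustrative routing of a workload profile to one of the four product
# surfaces the article describes. Rules and thresholds are invented.

def pick_surface(stage: str, gpus_needed: int, bursty: bool) -> str:
    """Route a workload to a product surface based on its profile."""
    if stage == "serving":
        # Teams that want to run models without building the serving layer.
        return "managed inference endpoint"
    if bursty:
        # Short-lived, application-oriented execution.
        return "serverless container"
    if gpus_needed > 8:
        # Training jobs that span many accelerators.
        return "cluster"
    # Direct, interactive developer access.
    return "self-service instance"
```

For example, under these assumed rules a 64-GPU training job routes to a cluster, while a single-GPU experiment lands on a self-service instance; the point is that one platform covers every branch.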

The European angle and its limits

Verda clearly benefits from Europe’s push for tech sovereignty. The region has long depended on US hyperscalers such as AWS, Microsoft and Google for cloud capacity. In AI, that dependence grows sharper because access to advanced compute shapes who can train, tune and deploy models at scale. A provider based in Europe can therefore market itself not only on location, but also on legal posture, ownership structure and strategic alignment with customers that want alternatives.

Bryon pushes that point further. He has argued that European neutrality can serve as a global differentiator against US-centric providers. He also says Verda does not intend to shift its headquarters to the US, unlike many European startups that eventually reincorporate in Delaware and align more closely with American capital and corporate structures. That is a deliberate political and commercial signal.

Still, neutrality is not a substitute for scale. The structural tension is clear. Customers may want sovereign or neutral compute, but they also want low prices, broad regional capacity, high availability and access to the latest hardware. US hyperscalers dominate because they combine those features with massive capital expenditure and global operating leverage. Verda’s challenge is to convert sovereignty demand into a durable infrastructure business before larger competitors adapt their local offerings or undercut on economics.

Funding, growth and financial signals

The company’s latest funding announcement provides several useful signals. Verda says the new capital will support product development and expansion into the UK, the US and Asia. It already has an office in London and plans additional offices in California and Asia. That is an ambitious footprint for a company that also wants to preserve a European identity.

It also reports that operations are cash flow positive and that its revenue run rate doubled to over $60 million in the first quarter of 2026. In a market where many AI infrastructure companies burn heavily to secure hardware and customers, that cash flow claim stands out. At the same time, the company told Sifted that the capital raised last year only lasted until January because of a surge in demand for compute. That detail cuts both ways. It signals strong demand. It also underlines how capital intensive this market remains, even for companies with revenue momentum.

Verda says it has raised around €170 million in total funding since inception. For an AI cloud company, that is substantial but not dominant. It can fund meaningful expansion, especially if blended with debt and if construction and leasing are paced carefully. It does not put Verda in the spending class of hyperscalers. The company therefore needs to stay selective in geography, customer mix and product scope.

Where Verda wants to go

Verda’s stated destination is larger than a regional GPU cloud. It wants to build the next generation of AI cloud infrastructure. In operational terms, that means expanding data centre capacity, entering new markets, hiring aggressively and refining a platform that supports instant provisioning and on-demand consumption across training and inference workloads.

Its near-term roadmap is visible in three tracks.

  • Geographic expansion through launches in the UK, the US and Asia, supported by an office in London and planned offices in California and Asia.
  • Infrastructure expansion through continued operations in Finland and Iceland, construction work in Finland and planned expansion into larger sites in Sweden.
  • Product expansion through continued development of self service compute, serverless containers, managed inference endpoints and AI Lab driven product decisions.

The AI Lab piece deserves attention. Bryon says the team works directly with customers and uses those insights to shape product decisions. That suggests Verda does not want to compete only on raw capacity. It also wants to improve fit for concrete AI use cases and shorten the distance between workload requirements and infrastructure features. In a crowded market, that customer proximity can matter as much as hardware access.

The strategic question around Verda

The central question is whether Verda can turn a timely position into a durable category. The timing is favourable. Demand for AI compute remains high. European buyers want alternatives. Inference workloads are broadening beyond model labs into product teams and enterprises. Buyers increasingly care about procurement speed, power sourcing and infrastructure flexibility.

But the market is unforgiving. GPU supply remains constrained. Site development takes time. Debt financed expansion adds discipline but also exposure. Large cloud providers can absorb lower margins, broaden service bundles and localise offerings when competitive pressure rises. Meanwhile, smaller specialised providers often compete for the same customers by promising lower friction and faster access.

Verda’s advantage lies in coherence. Its story aligns product architecture, geography and market thesis. It says compute should be easier to buy. It builds self service AI cloud services around that assumption. It roots the business in Europe without limiting expansion ambitions to Europe. It combines infrastructure control with application layer services. That does not guarantee scale, but it gives the company a sharper identity than many generic GPU marketplaces.

What to watch next

Three indicators will show whether Verda can move from strong positioning to durable infrastructure relevance.

  • Capacity execution. New sites and expanded construction in Finland and Sweden will reveal whether Verda can secure enough infrastructure to meet demand without eroding service quality or margins.
  • International traction. Expansion into the UK, the US and Asia will test whether its European neutrality is a local selling point or a broader global differentiator.
  • Product depth. The adoption of managed inference endpoints, serverless services and AI Lab informed features will show whether Verda can evolve beyond raw GPU access into a fuller AI cloud platform.

Verda is not just selling compute hours. It is making a claim about how AI infrastructure should be bought and operated. If that claim holds, the company could become one of the more consequential European players in AI cloud. If it does not, sovereignty branding alone will not close the gap with better capitalised rivals.

The bottom line

Verda is a Helsinki-based AI cloud company that wants to reduce friction in how teams buy and use GPU compute. Ruben Bryon leads the company. Lifeline Ventures, byFounders, Tesi, Varma and Nordic lenders back it. The platform offers self-service instances and clusters, serverless containers and managed inference endpoints. It operates in Finland and Iceland, is building out capacity in the Nordics and plans expansion into the UK, the US and Asia.

The larger bet is clear. Verda wants to become a European-rooted AI cloud provider with enough product depth, capacity control and international reach to matter in a market still dominated by US hyperscalers. Its funding, customer roster and revenue momentum suggest that bet has early traction. The harder part starts now. In AI infrastructure, narrative opens doors. Execution decides who stays in the room.