Decentralizing LLM Inference Compute
Infera unifies global inference compute into one integrated network
Utilizing existing infrastructure to create cheaper inference networks
A.
Infera Node Network
We let anyone put their idle GPUs to work as node runners and earn commission fees paid in tokens.
B.
Inference API
Our Inference API gives developers access to a vast library of open-source models.
C.
Infera Token
Nodes perform inference work in exchange for INFER token payments.
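A developer's call to the Inference API can be sketched as an ordinary HTTP request. The endpoint URL, model name, and payload fields below are illustrative assumptions in an OpenAI-style shape, not the documented Infera API:

```python
import json

# Hypothetical endpoint -- an assumption for illustration only.
API_URL = "https://api.infera.example/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3-8b") -> dict:
    """Build a chat-completion payload for the Inference API (assumed schema)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Write a haiku about GPUs.")
print(json.dumps(payload, indent=2))

# Sending it would be a regular POST, e.g. with the `requests` library:
#   requests.post(API_URL, json=payload,
#                 headers={"Authorization": "Bearer <your-key>"})
```

The node network handles routing the request to an available GPU; from the developer's side it behaves like any hosted inference endpoint.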
Harnessing the world's hidden GPU power with a decentralized AI network
We transform idle GPUs into a global AI network, democratizing access and pioneering a decentralized digital future.
1B+
Idle GPUs worldwide ready to be harnessed
10x
Potential increase in global AI compute capacity
90%
of AI models rely on GPU-powered inference
Becoming a node runner is easy: you can be earning rewards in just five minutes
1. Install the Infera App
Install our desktop app and web extension to turn your computer into an Infera node and contribute compute.
2. Run your node
When you toggle the app on, your computer starts performing inference tasks in the background.
3. Start earning
Using our desktop app or extension, you will be able to claim your INFER token rewards.
Start earning rewards — node runners receive INFER for their inference work
Using our compute network is simple — just make a regular API request
You are an expert programmer that writes simple, concise code and explanations. Write a python function that accepts a long input string, extracts words in between '<span>' and '</span>' and returns a list.
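The example prompt above asks for a span-extraction function; a minimal regex-based sketch of the kind of answer the model would return:

```python
import re

def extract_spans(text: str) -> list[str]:
    """Return the pieces of text found between '<span>' and '</span>' tags."""
    # Non-greedy match so adjacent spans are captured separately;
    # re.DOTALL lets a span's content include newlines.
    return re.findall(r"<span>(.*?)</span>", text, flags=re.DOTALL)

print(extract_spans("a <span>b</span> c <span>d</span>"))  # prints ['b', 'd']
```

The request itself is sent like any other API call; only the prompt in the payload changes.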