# Inferix AI

The preceding sections show that the hardware of the Inferix network is well suited to serve as infrastructure for federated AI. Next, we discuss the design of the Inferix Federated AI system.

#### Figure 13: Inferix and TensorOpera integrated architecture <a href="#fig_inferix_tensoropera_integrated_architecture" id="fig_inferix_tensoropera_integrated_architecture"></a>

<figure><img src="https://3032367557-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FcE7ARktdPinaZgrdSJOX%2Fuploads%2F4Lvbt9adT6w77q7YQkLL%2Finferix-decentralized-fed-ai-architecture.svg?alt=media&#x26;token=8683dc8d-e428-483a-ae5d-19c0a1365a9c" alt=""><figcaption><p>Inferix and TensorOpera integrated architecture</p></figcaption></figure>

In the [architectural design](#fig_inferix_tensoropera_integrated_architecture), Inferix enables *generative AI artists* and content creators to access AI models trained by *AI Builders* within the Inferix community, as well as models trained by the Inferix Team itself (built-in models). These services can be hosted on the Inferix Manager Node system or on the TensorOpera AI platform.

AI Builders have the option to run their models directly on the Inferix infrastructure or through the TensorOpera Bridge. The resulting output can be hosted on the Inferix infrastructure, with a custom-domain option.

In addition to handling graphics rendering tasks, Inferix GPU Nodes also serve as Federated Learning Clients by running the Inferix TensorOpera CLI.

* *TensorOpera CLI:* a CLI client built on the TensorOpera open-source stack, with the Inferix PoW algorithm integrated.
* *Inferix PoW:* a general PoW algorithm used to measure the actual work performed by workers other than those handling rendering tasks. Inferix PoW is based on the Proof-of-Rendering mechanism to calculate the [Inferix Bench](https://docs.inferix.io/inferix-whitepaper/economic-model/inferix-bench-and-ibme/ib-and-ibm), incorporating an algorithm that accurately measures a node's actual working time.
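To make the "actual working time" measurement concrete, the sketch below shows one simple way such an algorithm could work: the node emits periodic heartbeats while busy, and the verifier sums the intervals between consecutive busy heartbeats, discarding gaps long enough to indicate the node went offline. This is a minimal illustration, not the actual Inferix PoW implementation; the `Heartbeat` type, the `max_gap` threshold, and `effective_work_seconds` are all hypothetical names chosen for this example.

```python
from dataclasses import dataclass

@dataclass
class Heartbeat:
    timestamp: float  # seconds since some reference epoch
    busy: bool        # whether the node reported active work in this beat

def effective_work_seconds(heartbeats: list[Heartbeat], max_gap: float = 60.0) -> float:
    """Sum the time between consecutive heartbeats where the node was busy,
    ignoring gaps longer than max_gap (the node is presumed offline there)."""
    total = 0.0
    for prev, cur in zip(heartbeats, heartbeats[1:]):
        gap = cur.timestamp - prev.timestamp
        if prev.busy and gap <= max_gap:
            total += gap
    return total

# Example: two 30 s busy intervals separated by a 170 s outage.
beats = [
    Heartbeat(0.0, True),
    Heartbeat(30.0, True),
    Heartbeat(200.0, True),   # 170 s gap before this beat is discarded
    Heartbeat(230.0, False),
]
print(effective_work_seconds(beats))  # 60.0
```

The credited time (60 s here) could then feed into a benchmark-weighted score along the lines of the Inferix Bench, so that nodes are rewarded for verified uptime rather than self-reported activity.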

