
Private Federated Learning Framework for Large Language Models

Empowering privacy-preserving LLM training with
secure multi-party computation.


Accepted at the Flower AI Summit 2025

PetalGuard Core Features


GPU-Based Acceleration

Leverages GPU acceleration for fast training
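As a rough illustration of what GPU-side aggregation can look like, here is a minimal sketch assuming PyTorch and one flattened update per client; the function name and shapes are illustrative, not PetalGuard's actual API.

```python
# Minimal sketch of GPU-side federated averaging with PyTorch; names and shapes
# are illustrative, not PetalGuard's actual API.
import torch

def average_updates(client_updates: list[torch.Tensor]) -> torch.Tensor:
    """Average flattened client updates on the GPU when one is available."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    stacked = torch.stack([u.to(device) for u in client_updates])
    return stacked.mean(dim=0)

# Example: three clients, each contributing a one-million-parameter update.
updates = [torch.randn(1_000_000) for _ in range(3)]
global_update = average_updates(updates)
```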


Usability

Users can easily configure parameters to optimize performance and security
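To give a feel for the kind of configuration surface this implies (the roadmap below lists YAML-based configuration support), here is a hypothetical config parsed with PyYAML; every key shown is an assumption made for illustration, not PetalGuard's real schema.

```python
# Hypothetical configuration sketch; the keys below are assumptions made for
# illustration, not PetalGuard's real schema. Requires PyYAML (pip install pyyaml).
import yaml

EXAMPLE_CONFIG = """
training:
  rounds: 50
  clients_per_round: 8
  local_epochs: 1
security:
  secure_aggregation: true
  min_surviving_clients: 5    # abort the round if too many clients drop out
transport:
  grpc_chunk_bytes: 67108864  # stream large models in 64 MiB chunks
"""

config = yaml.safe_load(EXAMPLE_CONFIG)
print(config["security"]["secure_aggregation"])  # True
```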


Client Dropout Handling

Ensures correct aggregation in the event of dropouts


Extended gRPC Support for Large Models

Enables communication of models exceeding gRPC's 2 GB message size limit
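The usual workaround is to stream the serialized model in sub-limit chunks over a streaming RPC. The sketch below only shows the chunking and reassembly mechanics; the dict-shaped chunks stand in for a protobuf message, and the streaming RPC that would carry them is an assumption, not PetalGuard's actual proto.

```python
# Sketch of chunking a serialized model so each message stays far below gRPC's
# 2 GB cap. The dict-shaped chunks stand in for a protobuf message; the streaming
# RPC that would carry them is assumed, not PetalGuard's actual proto.
from typing import Iterator

CHUNK_BYTES = 64 * 1024 * 1024  # 64 MiB per message

def chunk_model(model_bytes: bytes, chunk_bytes: int = CHUNK_BYTES) -> Iterator[dict]:
    """Yield ordered chunks suitable for a streaming upload."""
    total = len(model_bytes)
    for seq, start in enumerate(range(0, total, chunk_bytes)):
        yield {
            "seq": seq,
            "data": model_bytes[start:start + chunk_bytes],
            "last": start + chunk_bytes >= total,
        }

def reassemble(chunks: list[dict]) -> bytes:
    """Receiver side: stitch the chunks back into the original byte string."""
    return b"".join(c["data"] for c in sorted(chunks, key=lambda c: c["seq"]))

# Round-trip check with a tiny payload and chunk size, purely to show the mechanics.
payload = b"model weights that would normally be gigabytes long"
assert reassemble(list(chunk_model(payload, chunk_bytes=8))) == payload
```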


Cryptographically Guaranteed Privacy

Employs secure multi-party computation techniques for secure aggregation
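One standard multi-party computation building block for secure aggregation is pairwise masking: every pair of clients agrees on a random mask that one adds and the other subtracts, so the server only ever sees masked updates while the masks cancel in the sum. The sketch below is a simplified illustration of that idea, not PetalGuard's exact protocol; a production version works over a finite field and includes key agreement.

```python
# Simplified pairwise-masking secure aggregation (illustrative only; a production
# protocol works over a finite field and adds key agreement and dropout recovery).
import numpy as np

rng = np.random.default_rng(0)
DIM = 4
clients = [0, 1, 2]
updates = {i: rng.normal(size=DIM) for i in clients}        # private local updates

# Every pair (i, j) with i < j shares one random mask: i adds it, j subtracts it.
pair_masks = {(i, j): rng.normal(size=DIM) for i in clients for j in clients if i < j}

def masked_update(i: int) -> np.ndarray:
    m = updates[i].copy()
    for (a, b), mask in pair_masks.items():
        if a == i:
            m += mask
        elif b == i:
            m -= mask
    return m

# The server only ever sees masked updates, yet the masks cancel in the sum.
server_sum = sum(masked_update(i) for i in clients)
assert np.allclose(server_sum, sum(updates.values()))
```

The client dropout handling described above typically builds on the same structure: if a client disappears mid-round, the surviving clients can reveal the mask seeds they shared with it so the aggregator can strip the masks that no longer cancel.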


Distributed Aggregation

Multiple servers aggregate updates jointly instead of relying on a single central aggregator
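One way to realize this, sketched below under the assumption of additive secret sharing, is for each client to split its update into one share per server; each server sums only the shares addressed to it, and only the recombined partial sums reveal the aggregate, so no single server sees any client's update. Names and the use of real-valued shares are illustrative, not PetalGuard's exact design.

```python
# Sketch of distributed aggregation via additive secret sharing: each client splits
# its update into one share per server, so no single server sees a plaintext update.
# Illustrative only; a production design would use finite-field shares and
# authenticated channels.
import numpy as np

rng = np.random.default_rng(1)
DIM, N_SERVERS = 4, 3
client_updates = [rng.normal(size=DIM) for _ in range(5)]   # five clients

def split_into_shares(update: np.ndarray, n: int) -> list[np.ndarray]:
    """Return n additive shares that sum back to the original update."""
    shares = [rng.normal(size=update.shape) for _ in range(n - 1)]
    shares.append(update - sum(shares))
    return shares

# Each server accumulates only the shares addressed to it.
server_totals = [np.zeros(DIM) for _ in range(N_SERVERS)]
for update in client_updates:
    for s, share in enumerate(split_into_shares(update, N_SERVERS)):
        server_totals[s] += share

# Only the recombined partial sums reveal the aggregate model update.
assert np.allclose(sum(server_totals), sum(client_updates))
```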

Latest Blogs

How to Redefine AI Security with the PetalGuard Approach

In 2023, genetic testing giant 23andMe suffered a massive breach,...

May 27, 2025
Rethinking Secure AI: How PetalGuard Sets a New Benchmark for Federated Learning

Business leaders are wary. They urgently want to reap the...

May 27, 2025
Get in Touch

Contact Our Team


PetalGuard Roadmap

gRPC support for large models
Distributed secure aggregation
YAML-based configuration support
Client dropout handling
Initial fork of Flower v1.5
Verifiable federated learning
Inference attack prevention
Model poisoning protection

Join the Community!

Join us on our journey to make federated learning available to everyone.
