1 change: 0 additions & 1 deletion .github/workflows/unit_tests.yml
@@ -25,7 +25,6 @@ jobs:
run: |
python -m pip install --upgrade pip
pip install -r requirements-dev.txt
pip install hidet
pip install .
- name: Test with pytest
run: |
43 changes: 0 additions & 43 deletions README.md
@@ -2,9 +2,6 @@
![](https://github.com/CentML/centml-python-client/actions/workflows/unit_tests.yml/badge.svg)

### Installation
First, ensure you meet the requirements for [Hidet](https://github.com/hidet-org/hidet), namely:
- CUDA Toolkit 11.6+
- Python 3.8+

To install without cloning, run the following command:
```bash
@@ -36,46 +33,6 @@ source scripts/completions/completion.<shell language>
Shell language can be: bash, zsh, fish
(Hint: add `source /path/to/completions/completion.<shell language>` to your `~/.bashrc`, `~/.zshrc` or `~/.config/fish/completions/centml.fish`)

### Compilation

centml-python-client's compiler feature allows you to compile your ML model remotely using the [hidet](https://hidet.org/docs/stable/index.html) backend. \
Thus, to use the compilation feature, make sure to run:
```bash
pip install hidet
```

To run the server locally, you can use the following CLI command:
```bash
centml server
```
By default, the server will run at the URL `http://0.0.0.0:8090`. \
You can change this by setting the environment variable `CENTML_SERVER_URL`.
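For example, you might override the default address before starting the server (the URL below is illustrative, not a required value):

```shell
# Point the client and server at a non-default address.
# http://0.0.0.0:8090 is the documented default; the value below is an example.
export CENTML_SERVER_URL="http://127.0.0.1:9000"
```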


Then, within your python script include the following:
```python
import torch
# This will import the "centml" torch.compile backend
import centml.compiler

# Define these yourself
model = ...
inputs = ...

# Pass the "centml" backend
compiled_model = torch.compile(model, backend="centml")
# Since torch.compile is JIT, compilation is only triggered when you first call the model
output = compiled_model(inputs)
```
Note that the centml backend compiler is non-blocking: until the server returns the compiled model, your Python script will use the uncompiled model to generate outputs.
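The non-blocking behavior described above follows a common pattern: serve the eager model while compilation runs in the background, then swap in the compiled version once it arrives. The sketch below illustrates that pattern in plain Python; it is not the actual centml implementation, and all names (`NonBlockingCompiledModel`, `slow_compile`) are illustrative.

```python
import threading
import time

class NonBlockingCompiledModel:
    """Illustrative sketch (not centml's code): fall back to the eager
    model until a background "remote compilation" finishes."""

    def __init__(self, model, compile_fn):
        self._model = model      # eager fallback, always available
        self._compiled = None    # filled in by the worker thread
        self._lock = threading.Lock()
        threading.Thread(target=self._compile, args=(compile_fn,), daemon=True).start()

    def _compile(self, compile_fn):
        compiled = compile_fn(self._model)  # stands in for a remote round trip
        with self._lock:
            self._compiled = compiled

    def __call__(self, x):
        with self._lock:
            active = self._compiled or self._model
        return active(x)

# Toy usage: "compilation" just returns the same function after a delay.
eager = lambda x: x + 1

def slow_compile(fn):
    time.sleep(0.1)
    return fn  # pretend this is the optimized version

m = NonBlockingCompiledModel(eager, slow_compile)
first = m(1)     # typically served by the eager model (compilation still running)
time.sleep(0.2)
second = m(1)    # served by the "compiled" model once the thread finishes
```

Either way the caller gets a correct answer immediately; only the implementation behind the call changes once compilation completes.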

Again, make sure your script's environment sets `CENTML_SERVER_URL` to communicate with the desired server.

To see logs, add this to your script before triggering compilation:
```python
import logging

logging.basicConfig(level=logging.INFO)
```

### Tests
To run tests, first install required packages:
```bash
2 changes: 1 addition & 1 deletion centml/__init__.py
@@ -1 +1 @@
from centml.compiler.main import compile

7 changes: 0 additions & 7 deletions centml/cli/main.py
@@ -30,13 +30,6 @@ def cli():
cli.add_command(logout)


@cli.command(help="Start remote compilation server")
def server():
from centml.compiler.server import run

run()


@click.group(help="CentML cluster CLI tool")
def ccluster():
pass
3 changes: 0 additions & 3 deletions centml/compiler/__init__.py

This file was deleted.

194 changes: 0 additions & 194 deletions centml/compiler/backend.py

This file was deleted.

45 changes: 0 additions & 45 deletions centml/compiler/config.py

This file was deleted.

57 changes: 0 additions & 57 deletions centml/compiler/main.py

This file was deleted.
