A PyTorch-based recommendation system framework for production-ready deep learning models
TorchEasyRec implements state-of-the-art deep learning models for recommendation tasks: candidate generation (matching), scoring (ranking), multi-task learning, and generative recommendation. It enables efficient development of high-performance models through simple configuration and easy customization.
- MaxCompute/ODPS - Native Alibaba Cloud data warehouse integration
- Parquet - High-performance columnar file format for Local, OSS, or NAS storage, with built-in auto-rebalancing
- CSV - Standard tabular file format
- Streaming - Kafka message queue integration, also compatible with Alibaba Cloud DataHub
- Checkpointable - Resume training from exact data position
- Distributed Training - Hybrid data/model parallelism via TorchRec
- Large Embeddings - Row-wise, column-wise, table-wise sharding
- Zero-Collision Hash - Large-scale dynamic embeddings with eviction policies (LFU/LRU)
- Mixed Precision - FP16/BF16 training support
- Run Everywhere - Local, PAI-DLC, PAI-DSW
- Feature Generation - Consistent FG between training and serving
- EAS Deployment - Auto-scaling model serving on Alibaba Cloud
- TensorRT/AOTInductor - Model acceleration for inference
- 20+ Models - Battle-tested algorithms powering real-world recommendation: DSSM, TDM, DeepFM, DIN, MMoE, PLE, PEPNet, DLRM-HSTU and more
- 10+ Feature Types - IdFeature, RawFeature, ComboFeature, LookupFeature, ExprFeature, SequenceFeature, CustomFeature, and more
- Custom Model - Easy to implement customized models
- Custom Feature - Easy to implement customized features
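The mixed-precision feature above follows the standard PyTorch AMP pattern. The sketch below shows that general pattern with a toy model; it illustrates plain `torch.autocast`, not TorchEasyRec's own training loop or configuration (model, data, and hyperparameters here are illustrative assumptions):

```python
# Generic BF16 mixed-precision training sketch in plain PyTorch.
# NOTE: toy model and data; not TorchEasyRec's actual training loop.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 16), torch.randn(32, 1)

for _ in range(3):
    opt.zero_grad()
    # BF16 autocast needs no GradScaler; for FP16 on CUDA, pair
    # torch.autocast(device_type="cuda", dtype=torch.float16)
    # with torch.amp.GradScaler for loss scaling.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

BF16 is generally preferred over FP16 when hardware supports it, since its wider exponent range avoids the loss-scaling machinery FP16 requires.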
Matching (Candidate Generation)

| Model | Description |
|---|---|
| DSSM | Two-tower deep semantic matching model |
| MIND | Multi-interest network with dynamic routing |
| TDM | Tree-based deep model for large-scale retrieval |
| DAT | Dual augmented two-tower model |
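The two-tower idea behind DSSM can be sketched in a few lines of plain PyTorch: user and item features pass through separate MLP towers, and relevance is the similarity between the resulting embeddings. Layer sizes and feature handling below are illustrative assumptions, not TorchEasyRec's actual DSSM implementation:

```python
# Minimal two-tower (DSSM-style) retrieval sketch in plain PyTorch.
# NOTE: dimensions and architecture are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        # L2-normalize so the dot product below is a cosine similarity
        return F.normalize(self.mlp(x), dim=-1)

class TwoTower(nn.Module):
    def __init__(self, user_dim, item_dim, emb_dim=32):
        super().__init__()
        self.user_tower = Tower(user_dim, emb_dim)
        self.item_tower = Tower(item_dim, emb_dim)

    def forward(self, user_feats, item_feats):
        u = self.user_tower(user_feats)   # (B, emb_dim)
        v = self.item_tower(item_feats)   # (B, emb_dim)
        return (u * v).sum(-1)            # per-pair similarity, shape (B,)

model = TwoTower(user_dim=16, item_dim=24)
scores = model(torch.randn(4, 16), torch.randn(4, 24))
```

Because the towers are independent, item embeddings can be precomputed and served from an approximate-nearest-neighbor index, which is what makes two-tower models practical for large-scale retrieval.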
Ranking

| Model | Description |
|---|---|
| DeepFM | Factorization-machine based neural network |
| WideAndDeep | Wide & Deep learning for recommendations |
| MultiTower | Flexible multi-tower architecture |
| DIN | Deep Interest Network with attention mechanism |
| DLRM | Deep Learning Recommendation Model |
| DCN | Deep & Cross Network |
| DCN-V2 | Improved Deep & Cross Network |
| MaskNet | Instance-guided mask for feature interaction |
| xDeepFM | Compressed interaction network |
| WuKong | Dense scaling with high-order interactions |
| RocketLaunching | Knowledge distillation framework |
Multi-Task Learning

| Model | Description |
|---|---|
| MMoE | Multi-gate Mixture-of-Experts |
| PLE | Progressive Layered Extraction |
| DBMTL | Deep Bayesian Multi-task Learning |
| PEPNet | Personalized Embedding and Parameter Network |
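The core MMoE mechanism, shared experts combined per task by softmax gates, can be sketched as follows. Expert/gate sizes and the two task heads are illustrative assumptions, not TorchEasyRec's actual MMoE code:

```python
# Minimal MMoE (Multi-gate Mixture-of-Experts) sketch in plain PyTorch.
# NOTE: sizes and task heads are assumptions for illustration only.
import torch
import torch.nn as nn

class MMoE(nn.Module):
    def __init__(self, in_dim, num_experts=4, num_tasks=2, expert_dim=32):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU())
            for _ in range(num_experts)
        )
        # one softmax gate per task over the shared experts
        self.gates = nn.ModuleList(
            nn.Linear(in_dim, num_experts) for _ in range(num_tasks)
        )
        self.heads = nn.ModuleList(
            nn.Linear(expert_dim, 1) for _ in range(num_tasks)
        )

    def forward(self, x):
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, D)
        outs = []
        for gate, head in zip(self.gates, self.heads):
            w = torch.softmax(gate(x), dim=-1).unsqueeze(-1)  # (B, E, 1)
            task_in = (w * expert_out).sum(dim=1)             # (B, D)
            outs.append(head(task_in).squeeze(-1))            # (B,)
        return outs  # one logit tensor per task (e.g. CTR, CVR)

model = MMoE(in_dim=16)
ctr_logit, cvr_logit = model(torch.randn(8, 16))
```

Per-task gating lets each objective weight the shared experts differently, which mitigates the negative transfer that a single shared bottom often causes.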
Generative Recommendation

| Model | Description |
|---|---|
| DLRM-HSTU | Hierarchical Sequential Transduction Units |
Get started with TorchEasyRec in minutes:
| Tutorial | Description |
|---|---|
| Local Training | Train models on your local machine or single server |
| PAI-DLC Training | Distributed training on Alibaba Cloud PAI-DLC |
| PAI-DLC + MaxCompute Table | Train with MaxCompute (ODPS) tables on PAI-DLC |
For the complete documentation, please refer to https://torcheasyrec.readthedocs.io/
- GitHub Issues - report bugs or request features
- DingTalk Groups
  - If you have any questions about how to use TorchEasyRec, please join the DingTalk group and contact us.
  - If you have enterprise service needs or need to purchase Alibaba Cloud services to build a recommendation system, please join the DingTalk group to contact us.
Any contributions you make are greatly appreciated!
- Please report bugs by submitting an issue
- Please submit contributions using pull requests
- Please refer to the Development Guide for more details
If you use TorchEasyRec in your research, please cite:
@software{torcheasyrec2024,
title = {TorchEasyRec: An Easy-to-Use Framework for Recommendation},
author = {Alibaba PAI Team},
year = {2024},
url = {https://github.com/alibaba/TorchEasyRec}
}

TorchEasyRec is released under the Apache License 2.0. Please note that third-party libraries may not be licensed under the same terms as TorchEasyRec.
