Commit

[Docs] Update README and KafkaDataset Document. (DeepRec-AI#265)
liutongxuan authored Jun 13, 2022
1 parent fc97159 commit 6f30f8a
Showing 2 changed files with 8 additions and 7 deletions.
11 changes: 6 additions & 5 deletions README.md
@@ -23,15 +23,16 @@ DeepRec has super large-scale distributed training capability, supporting model
- Multi-tier Hybrid Embedding Storage
#### **Performance Optimization**
- Distributed Training Framework Optimization, such as grpc+seastar, FuseRecv, StarServer, HybridBackend etc.
-- Runtime Optimization, such as CPU memory allocator (PRMalloc), GPU memory allocator etc.
+- Runtime Optimization, such as CPU memory allocator (PRMalloc), GPU memory allocator, cost-based and critical-path-first Executor etc.
- Operator level optimization, such as BF16 mixed precision optimization, sparse operator optimization and EmbeddingVariable on PMEM and GPU, new hardware feature enabling, etc.
- Graph level optimization, such as AutoGraphFusion, SmartStage, AutoPipeline, StrutureFeature, MicroBatch etc.
- Compilation optimization, support BladeDISC, XLA etc.
#### **Deploy and Serving**
-- Incremental model loading and exporting
-- Super-scale sparse model distributed serving
-- Multi-tier hybrid storage and multi backend supported
-- Online deep learning with low latency
+- Incremental model loading and exporting.
+- Super-scale sparse model distributed serving.
+- Multi-tier hybrid storage and multi backend supported.
+- Online deep learning with low latency.
+- High performance processor with SessionGroup supported.


***
4 changes: 2 additions & 2 deletions docs/KafkaDataset.md
@@ -41,7 +41,7 @@ class KafkaDataset(dataset_ops.Dataset):
import tensorflow as tf
from tensorflow.python.data.ops import iterator_ops

-kafka_dataset = tf.data.KafkaDataset(topics=["dewu_1_partition:0:0:-1"],
+kafka_dataset = tf.data.KafkaDataset(topics=["test_1_partition:0:0:-1"],
group="test_group1",
timeout=100,
eof=False)
@@ -146,4 +146,4 @@ with tf.Session() as sess:
for _ in range(100):
x = sess.run(next_elements, feed_dict={handle: train_handle})
print(x)
-```
+```
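The topic string changed in this diff, `"test_1_partition:0:0:-1"`, appears to pack four fields into one colon-separated spec: topic name, partition, start offset, and end offset, with `-1` presumably meaning "read to the end of the partition". As a minimal illustration of that assumed format (the `TopicSpec`/`parse_topic_spec` names are hypothetical, not part of DeepRec's API), a plain-Python parser might look like:

```python
# Hypothetical helper illustrating the assumed "topic:partition:start:end"
# spec format used by the KafkaDataset example above. Field meanings are
# inferred from the sample "test_1_partition:0:0:-1"; -1 is taken to mean
# "no upper bound on the offset".
from typing import NamedTuple


class TopicSpec(NamedTuple):
    topic: str
    partition: int
    start_offset: int
    end_offset: int  # -1 => read until the end of the partition


def parse_topic_spec(spec: str) -> TopicSpec:
    """Split a "topic:partition:start:end" string into its fields.

    rsplit with maxsplit=3 keeps any colons inside the topic name intact.
    """
    topic, partition, start, end = spec.rsplit(":", 3)
    return TopicSpec(topic, int(partition), int(start), int(end))


print(parse_topic_spec("test_1_partition:0:0:-1"))
# TopicSpec(topic='test_1_partition', partition=0, start_offset=0, end_offset=-1)
```

Using `rsplit` rather than `split` is a deliberate choice so a topic name containing a colon would still parse, under the assumption that the last three fields are always numeric.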
