Commit History

Author  SHA1  Message  Committed
  AlpinDale dfa59bc5f9 fix: 16 GPUs in a cluster 7 months ago
  AlpinDale 17eb1b7eb9 chore: remove ray health check 7 months ago
  AlpinDale de62ceb18c refactor: eliminate parallel worker per-step task scheduling overhead 7 months ago
  AlpinDale 9f3d6205ce fix ray gpu executor 7 months ago
  AlpinDale 236be273e5 feat: tensor parallel speculative decoding (#554) 7 months ago
  AlpinDale c6a501f682 add multiprocessing executor; make ray optional 7 months ago
  AlpinDale ef733aee43 implement ExecuteModelData to reduce executor complexity 7 months ago
  AlpinDale 7bcf4c3fc9 centralize gpu worker construction 7 months ago
  AlpinDale fb982981ce num_lookahead_slots in neuron and ray executors 7 months ago
  AlpinDale 957ed7d244 type hints 7 months ago
  AlpinDale c21af7acad feat: `DistributedGPUExecutor` abstract class (#541) 8 months ago
  AlpinDale 199e776722 chore: move ray utils to executor dir 8 months ago
  AlpinDale 46159b107a formatting: pt1 8 months ago
  AlpinDale fca911ee0a vLLM Upstream Sync (#526) 8 months ago
  AlpinDale f894f7b176 Revert "reduce dedupe by wrapping in general worker class" 9 months ago
  AlpinDale 082b0b03bc Revert "actually run the workers" 9 months ago
  AlpinDale 36cf32649d actually run the workers 9 months ago
  AlpinDale 9fff6fb507 reduce dedupe by wrapping in general worker class 9 months ago
  AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388) 10 months ago
  AlpinDale 0f6d56b07f feat: model executor refactor (#367) 11 months ago
  AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 11 months ago