Commit History

Author SHA1 Message Date
AlpinDale ae04f57ec1 feat: Pipeline Parallel support (#581) 7 months ago
AlpinDale 405bb74612 Control plane comms refactor (#573) 7 months ago
AlpinDale 323fe23b21 chore: use 127.0.0.1 for single-node setups 7 months ago
AlpinDale dfa59bc5f9 fix: 16 GPUs in a cluster 7 months ago
AlpinDale 17eb1b7eb9 chore: remove ray health check 7 months ago
AlpinDale de62ceb18c refactor: eliminate parallel worker per-step task scheduling overhead 7 months ago
AlpinDale 9f3d6205ce fix ray gpu executor 7 months ago
AlpinDale 236be273e5 feat: tensor parallel speculative decoding (#554) 7 months ago
AlpinDale c6a501f682 add multiprocessing executor; make ray optional 7 months ago
AlpinDale ef733aee43 implement ExecuteModelData to reduce executor complexity 7 months ago
AlpinDale 7bcf4c3fc9 centralize gpu worker construction 7 months ago
AlpinDale fb982981ce num_lookahead_slots in neuron and ray executors 7 months ago
AlpinDale 957ed7d244 type hints 7 months ago
AlpinDale c21af7acad feat: `DistributedGPUExecutor` abstract class (#541) 8 months ago
AlpinDale 199e776722 chore: move ray utils to executor dir 8 months ago
AlpinDale 46159b107a formatting: pt1 8 months ago
AlpinDale fca911ee0a vLLM Upstream Sync (#526) 8 months ago
AlpinDale f894f7b176 Revert "reduce dedupe by wrapping in general worker class" 9 months ago
AlpinDale 082b0b03bc Revert "actually run the workers" 9 months ago
AlpinDale 36cf32649d actually run the workers 9 months ago
AlpinDale 9fff6fb507 reduce dedupe by wrapping in general worker class 9 months ago
AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388) 10 months ago
AlpinDale 0f6d56b07f feat: model executor refactor (#367) 11 months ago
AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 11 months ago