Commit History

Author SHA1 Message Date
  AlpinDale ae04f57ec1 feat: Pipeline Parallel support (#581) 7 months ago
  AlpinDale 405bb74612 Control plane comms refactor (#573) 7 months ago
  AlpinDale 323fe23b21 chore: use 127.0.0.1 for single-node setups 7 months ago
  AlpinDale dfa59bc5f9 fix: 16 GPUs in a cluster 7 months ago
  AlpinDale 17eb1b7eb9 chore: remove ray health check 7 months ago
  AlpinDale de62ceb18c refactor: eliminate parallel worker per-step task scheduling overhead 7 months ago
  AlpinDale 9f3d6205ce fix ray gpu executor 7 months ago
  AlpinDale 236be273e5 feat: tensor parallel speculative decoding (#554) 7 months ago
  AlpinDale c6a501f682 add multiprocessing executor; make ray optional 7 months ago
  AlpinDale ef733aee43 implement ExecuteModelData to reduce executor complexity 7 months ago
  AlpinDale 7bcf4c3fc9 centralize gpu worker construction 7 months ago
  AlpinDale fb982981ce num_lookahead_slots in neuron and ray executors 7 months ago
  AlpinDale 957ed7d244 type hints 7 months ago
  AlpinDale c21af7acad feat: `DistributedGPUExecutor` abstract class (#541) 8 months ago
  AlpinDale 199e776722 chore: move ray utils to executor dir 8 months ago
  AlpinDale 46159b107a formatting: pt1 8 months ago
  AlpinDale fca911ee0a vLLM Upstream Sync (#526) 8 months ago
  AlpinDale f894f7b176 Revert "reduce dedupe by wrapping in general worker class" 9 months ago
  AlpinDale 082b0b03bc Revert "actually run the workers" 9 months ago
  AlpinDale 36cf32649d actually run the workers 9 months ago
  AlpinDale 9fff6fb507 reduce dedupe by wrapping in general worker class 9 months ago
  AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388) 10 months ago
  AlpinDale 0f6d56b07f feat: model executor refactor (#367) 11 months ago
  AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 11 months ago