Commit history (all commits by AlpinDale, 1 year ago):

baa1e81ac6  update gitignore
46c56a43c1  add loss functions :)
c60d9b8544  add initializer
419bd74155  add indentation
8b5425d3be  add gpt tokenizer
17eebd7332  add dominator headers
7fcef5d9ec  add graph structures
fc1d948b7f  add dot utils and parallel tensor libs
d805ca629c  add batching configuration
76fd42e472  add layer headers
54daef85d6  machine view headers
ba030f3b96  add hashmap and a basic graph
89635aabbb  add legion accessor for later
a93c9be917  add const definitions
27fcf6e902  feat: initial commit
2d6b679ef0  fix: logging in llama, and some docs
092476ed97  remove leftover positional arg for logging
98c4418114  fix: llama support
7adfb9d085  chore: refactor CUDA kernels to match AWQ's
bc6b574f37  fix: properly include layernorm kernels
5b23bb18d2  fix setup.py for awq
e8e89b7e30  fix: unnecessary indentation
95c7ad47b5  add llama support
7f72d4c892  move this stuff around lol
d35608f0e3  add to model loader
342b22a5cc  move classes around
b7cc678706  add configurations for quant models
81f29b9f67  add all cuda kernels
d01db60bb1  feat: initial layernorm kernels
8c2353e803  llama support for safetensors