- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- https://www.ginnux.top/
- @Gin_HUST
Stars
- Learning Large Language Model (LLM): resources for studying large language models
- Support 0.49.x: Reset Cursor AI MachineID & Bypass Higher Token Limit. Automatically resets the machine ID for Cursor AI so Pro features can be used for free after hitting "You've reached your trial request limit." / "Too many free trial accounts used on this machi…"
- 《代码随想录》 LeetCode problem-solving guide: a recommended order for 200 classic problems, 600k words of detailed illustrated explanations, video walkthroughs of the hard parts, over 50 mind maps, with C++, Java, Python, Go, JavaScript, and other language versions, so algorithm study is no longer confusing! 🔥🔥 Take a look, and you'll wish you had found it sooner! 🚀
- NuwaTS: a Foundation Model Mending Every Incomplete Time Series
- PyTorch implementation of "ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data" (AAAI 2025 [oral])
- A Fair and Scalable Time Series Forecasting Benchmark and Toolkit.
- A collection of resources on LLMs for time-series tasks
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
- A Chinese-language introductory tutorial for LangChain
- Image Captioning Vision Transformers (ViTs): transformer models that generate descriptive captions for images by combining the power of Transformers and computer vision. It leverages state-of-th…
- Transformer-based image captioning extension for pytorch/fairseq
- Tensorflow/Keras implementation of an image captioning neural network, using CNN and RNN
- Image Captioning using CNN+RNN Encoder-Decoder Architecture in PyTorch
- Model architecture: CNN encoder and RNN decoder
- Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models (CVPR 2024 Highlight)
- Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
- A comprehensive Hexo theme: blog + knowledge base + columns + notes, with a large set of built-in tag components and dynamic data components.
- Implementation of Vision Transformer from scratch and performance compared to standard CNNs (ResNets) and pre-trained ViT on CIFAR10 and CIFAR100.
- Using mmflow as a third-party lib to run evaluation.
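Several of the image-captioning repos above share the same pattern: a CNN compresses the image into a feature vector, and an RNN generates the caption one token at a time. A minimal NumPy sketch of that encoder-decoder pattern follows; all sizes, weights, and names are purely illustrative (randomly initialized, not taken from any of the repos):

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_encode(img, kernels):
    """Toy CNN encoder: valid 2D convolution + ReLU, then global average pooling."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    fmap = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for k in range(kernels.shape[0]):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                fmap[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    fmap = np.maximum(fmap, 0.0)           # ReLU
    return fmap.mean(axis=(1, 2))          # one pooled feature per kernel

def rnn_decode(features, E, Wxh, Whh, Why, bos=0, max_len=5):
    """Toy RNN decoder: hidden state seeded by image features, greedy decoding."""
    h = np.tanh(features)                  # feature dim == hidden dim here
    token, caption = bos, []
    for _ in range(max_len):
        x = E[token]                       # embed the previous token
        h = np.tanh(Wxh @ x + Whh @ h)     # vanilla RNN step
        token = int(np.argmax(Why @ h))    # greedy next-token choice
        caption.append(token)
    return caption

# Illustrative sizes: 8 conv kernels -> 8-dim features == hidden size, vocab of 10.
hidden, vocab = 8, 10
kernels = rng.normal(size=(hidden, 3, 3))
E = rng.normal(size=(vocab, hidden))       # token embedding table
Wxh = rng.normal(size=(hidden, hidden))
Whh = rng.normal(size=(hidden, hidden))
Why = rng.normal(size=(vocab, hidden))

img = rng.normal(size=(16, 16))            # stand-in for a real image
caption = rnn_decode(cnn_encode(img, kernels), E, Wxh, Whh, Why)
print(caption)                             # a list of 5 token ids
```

A real implementation (as in the PyTorch repos above) would use learned weights, batching, attention over spatial features, and beam search instead of greedy decoding; this sketch only shows the data flow from image to token sequence.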