import streamlit as st
from streamlit_extras.switch_page_button import switch_page

st.title("LLaVA-NeXT")

st.success("""[Original tweet](https://twitter.com/mervenoyann/status/1770832875551682563) (March 21, 2024)""", icon="ℹ️")
st.markdown(""" """)

st.markdown("""LLaVA-NeXT is recently merged to πŸ€— Transformers and it outperforms many of the proprietary models like Gemini on various benchmarks!🀩   
For those who don't know LLaVA, it's a language model that can take image πŸ’¬  
Let's take a look, demo and more in this. 
""")
st.markdown(""" """)

st.image("pages/LLaVA-NeXT/image_1.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown("""
LLaVA is essentially a vision-language model that consists of a ViT-based CLIP encoder, an MLP projection, and Vicuna as the decoder ✨  
LLaVA 1.5 was released with Vicuna, while LLaVA-NeXT (1.6) is released with four different LLMs:  
- Nous-Hermes-Yi-34B  
- Mistral-7B  
- Vicuna 7B & 13B 
""")
st.markdown(""" """)

st.image("pages/LLaVA-NeXT/image_2.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown("""
Thanks to the Transformers integration, it is very easy to use LLaVA-NeXT, not only standalone but also with 4-bit loading and Flash Attention 2 💜  
See below for standalone usage 👇 
""")
st.markdown(""" """)

st.image("pages/LLaVA-NeXT/image_3.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown("""To fit large models and make it even faster and memory efficient, you can enable Flash Attention 2 and load model into 4-bit using bitsandbytes ⚑️ transformers makes it very easy to do this! See below πŸ‘‡ 
""")
st.markdown(""" """)

st.image("pages/LLaVA-NeXT/image_4.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown("""If you want to try the code right away, here's the [notebook](https://t.co/NvoxvY9z1u).  
Lastly, you can directly play with LLaVA-NeXT based on Mistral-7B through the demo [here](https://t.co/JTDlqMUwEh) 🤗 
""")
st.markdown(""" """)

st.video("pages/LLaVA-NeXT/video_1.mp4", format="video/mp4")
st.markdown(""" """)

st.info("""
Resources:  
[LLaVA-NeXT: Improved reasoning, OCR, and world knowledge](https://llava-vl.github.io/blog/2024-01-30-llava-next/) 
by Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, Yong Jae Lee (2024)   
[GitHub](https://github.com/haotian-liu/LLaVA/tree/main)   
[Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/llava_next)""", icon="📚")


st.markdown(""" """)
st.markdown(""" """)
st.markdown(""" """)
col1, col2, col3 = st.columns(3)
with col1:
    if st.button('Previous paper', use_container_width=True):
        switch_page("Depth Anything")
with col2:
    if st.button('Home', use_container_width=True):
        switch_page("Home")
with col3:
    if st.button('Next paper', use_container_width=True):
        switch_page("Painter")