import os
from collections.abc import Iterator
from threading import Thread

import gradio as gr
import spaces
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

DESCRIPTION = """\
# SILMA Kashif 2B Instruct v1.0 Playground

This is a demo of [`silma-ai/SILMA-Kashif-2B-Instruct-v1.0`](https://huggingface.co/silma-ai/SILMA-Kashif-2B-Instruct-v1.0).

**NOTE:** Kashif is a RAG-optimized model; it is trained only to answer questions based on the provided context.
"""

MAX_MAX_NEW_TOKENS = 1048
DEFAULT_MAX_NEW_TOKENS = 600
MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "11000"))

# Note: the model is placed via device_map="auto" below; inputs are moved to model.device at generation time.

model_id = "silma-ai/SILMA-Kashif-2B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
model.config.sliding_window = 12288
model.eval()


@spaces.GPU(duration=90)
def generate(
    message: str,
    chat_history: list[dict],
    max_new_tokens: int = 1024,
    temperature: float = 0.6,
) -> Iterator[str]:
    conversation = chat_history.copy()
    conversation.append({"role": "user", "content": message})

    input_ids = tokenizer.apply_chat_template(conversation, add_generation_prompt=True, return_tensors="pt")
    if input_ids.shape[1] > MAX_INPUT_TOKEN_LENGTH:
        input_ids = input_ids[:, -MAX_INPUT_TOKEN_LENGTH:]
        gr.Warning(f"Trimmed input from conversation as it was longer than {MAX_INPUT_TOKEN_LENGTH} tokens.")
    input_ids = input_ids.to(model.device)

    # Stream tokens back to the UI as they are generated.
    streamer = TextIteratorStreamer(tokenizer, timeout=20.0, skip_prompt=True, skip_special_tokens=True)
    generate_kwargs = dict(
        input_ids=input_ids,
        streamer=streamer,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
    )
    t = Thread(target=model.generate, kwargs=generate_kwargs)
    t.start()

    outputs = []
    for text in streamer:
        outputs.append(text)
        yield "".join(outputs)


demo = gr.ChatInterface(
    fn=generate,
    additional_inputs=[
        gr.Slider(
            label="Max new tokens",
            minimum=1,
            maximum=MAX_MAX_NEW_TOKENS,
            step=1,
            value=DEFAULT_MAX_NEW_TOKENS,
        ),
        gr.Slider(
            label="Temperature",
            minimum=0.01,
            maximum=4.0,
            step=0.1,
            value=0.01,
        )
    ],
    stop_btn=None,
    examples=[
        ["Hello"],
        ["""أجب على السؤال بناءً على السياق أدناه 

        السياق: تشمل الاتفاقيات رسوم حمل سنوية ثابت قدها 30 مليون جنيه إسترليني للقنوات نظراً لأن كلاً من مزوديها قادرين على تأمين دفعات إضافية إذا ما حققت هذه القنوات أهدافاً متعلقةً بالأداء. لا يوجد حالياً ما يشير إلى ما إذا كان الاتفاق الجديد يشمل محتوىً إضافياً كالفيديو عند الطلب والدقة العالية ، كذلك الذي سبق أن قدمته بي سكاي بي. وقد وافقت كل من بي سكاي بي و فيرجين ميديا على إنهاء الدعاوى القضائية بالمحكمة العليا ضد بعضهما بشأن معاليم الحمل التي تخص قنواتهما الأساسية. 

        السؤال: ماسم الشركة التي وافقت على إنهاء دعواها القضائية ضد بي سكاي بي بالمحكمة العليا؟ 

        الإجابة:
"""],
    ["""Answer the question based on the context below:\n
Context:\n
For other people named Joseph Aoun, see Joseph Aoun (disambiguation).
His Excellency
Joseph Aoun
جوزيف عون

Aoun in 2025
14th President of Lebanon
Incumbent
Assumed office
9 January 2025
Prime Minister	Najib Mikati
Nawaf Salam
Preceded by	Najib Mikati (acting)
Michel Aoun
14th Commander of the Lebanese Armed Forces
Incumbent
Assumed office
8 March 2017
President	
Michel Aoun
Najib Mikati (acting)
Himself
Preceded by	Jean Kahwaji
Personal details
Born	10 January 1964 (age 61)
Sin el Fil, Mount Lebanon, Lebanon
Spouse	Nehmat Nehmeh
Children	2
Education	Lebanese American University (BA)
Lebanese Army Military Acad.
Military service
Allegiance	 Lebanon
Branch	Lebanese Army
Service years	1983–present
Rank	 General
Wars	Lebanese Civil War
Syrian civil war spillover in Lebanon
Joseph Khalil Aoun (/aʊn/; Arabic: جوزيف خليل عون; born 10 January 1964) is a Lebanese politician and general who became the 14th president of Lebanon on 9 January 2025.[1][2] He was appointed as the 14th Commander of the Lebanese Armed Forces in 2017.

Early life and education
Aoun was born on 10 January 1964, in the Beirut suburb of Sin el-Fil in the Metn District, the child of Hoda Ibrahim Makhlouta and Khalil Aoun.[3] He completed secondary school at the Collège des Frères Mont La Salle. His family is originally from the town of Al-Aaishiyah, Southern Lebanon.

Aoun enrolled at the Lebanese American University to pursue a bachelor's degree in political science and international affairs, which he earned in 2007. He also holds a bachelor's degree in military science from the Lebanese Army Military Academy.[4][5][6]

Military career
Aoun joined the Lebanese army in 1983. He trained abroad, especially in the United States and Syria. He also underwent counter-terrorism training in the United States in 2008 and Lebanon in 2013. He became head of the army's 9th Infantry Brigade in 2015.

Lebanese Civil War
In 1990, Aoun served as a lieutenant under the command of the Lebanese Army's Fawj al-Maghaweer (Arabic: فوج المغاوير, The Commando Regiment) under leader Bassam Gergi at the Adma barracks. During the Adma Battle in the Civil War's Liberation War's Elimination War, Gergi was killed and Aoun took over leadership within the Maghaweer group.[7][8]

Commander of Lebanese Armed Forces
In 2015, Aoun was appointed commander of the 9th Brigade deployed on the border with Israel. On 8 March 2017, the Lebanese government appointed him commander-in-chief of the Lebanese Armed Forces (LAF), replacing Jean Kahwaji.[9]

Aoun led battles against the Islamic State campaign in eastern Lebanon, where hundreds of Islamic State and Al-Nusra Front militants were entrenched on the border with Syria.[5] On 19 August 2017, he commanded the Jroud Dawn Operation, a successful offensive to expel the militants from their strongholds.[10]

Following protests in Lebanon and the political deadlock, General Aoun spoke out on 8 March 2021 criticising the Lebanese liquidity crisis and its impact on the military. His speech went viral on social media.[11]

On 15 December 2023, the Lebanese parliament voted to extend Aoun's term for one year, which was mainly endorsed by the Lebanese Opposition, the Amal Movement and the Progressive Socialist Party.[12] During this time, he led the LAF through the 2024 Israeli invasion of Lebanon. On 28 November 2024, parliament voted to extend his term a second time.[13]
        \n
        Question: is Joseph Aoun the president of Syria?\n
        Answer:
    """]
    ],
    cache_examples=False,
    type="messages",
    description=DESCRIPTION,
    css_paths="style.css",
    fill_height=True,
)


if __name__ == "__main__":
    demo.queue(max_size=20).launch()