awacke1 committed on
Commit 79e45b4 · verified · 1 Parent(s): 417cd22

Create app.py

Files changed (1)
  1. app.py +1196 -0
app.py ADDED
@@ -0,0 +1,1196 @@
+ import streamlit as st
+ import streamlit.components.v1 as components
+ import os
+ import json
+ import random
+ import base64
+ import glob
+ import math
+ import openai
+ import pytz
+ import re
+ import requests
+ import textract
+ import time
+ import zipfile
+ import huggingface_hub
+ import dotenv
+ from audio_recorder_streamlit import audio_recorder
+ from bs4 import BeautifulSoup
+ from collections import deque
+ from datetime import datetime
+ from dotenv import load_dotenv
+ from huggingface_hub import InferenceClient
+ from io import BytesIO
+ from openai import ChatCompletion
+ from PyPDF2 import PdfReader
+ from templates import bot_template, css, user_template
+ from xml.etree import ElementTree as ET
+ from PIL import Image
+ from urllib.parse import quote  # Ensure this import is included
+
+ # Set page configuration with a title and favicon
+ st.set_page_config(
+     page_title="📖🔍WordGameAI",
+     page_icon="🔍📖",
+     layout="wide",
+     initial_sidebar_state="expanded",
+     menu_items={
+         'Get Help': 'https://huggingface.co/awacke1',
+         'Report a bug': "https://huggingface.co/spaces/awacke1/WebDataDownload",
+         'About': "# Midjourney: https://discord.com/channels/@me/997514686608191558"
+     }
+ )
+
+ #PromptPrefix = 'Create a markdown outline and table with appropriate emojis for top ten graphic novel plotlines where you are defining the method steps of play for topic of '
+ #PromptPrefix2 = 'Create a streamlit python app. Show full code listing. Create a UI implementing each feature creatively with python, streamlit, using variables and smart tables with word and idiom keys, creating reusable dense functions with graphic novel entity parameters, and data driven app with python libraries and streamlit components for Javascript and HTML5. Use appropriate emojis for labels to summarize and list parts, function, conditions for topic: '
+
+ # Prompts for App, for App Product, and App Product Code
+ PromptPrefix = 'Create a word game rule set and background story with streamlit markdown outlines and tables with appropriate emojis for methodical step by step rules defining the game play rules. Use story structure architect rules to plan, structure and write three dramatic situations to include in the word game rules matching the theme for topic of '
+ PromptPrefix2 = 'Create a streamlit python user app with full code listing to create a UI implementing the plans, structure, situations and tables as python functions creating a word game with parts of speech and humorous word play which operates like word game rules and creates a compelling fun story using streamlit to create user interface elements like emoji buttons, sliders, drop downs, and data interfaces like dataframes to show tables, session_state to track inventory, character advancement and experience, locations, file_uploader to allow the user to add images which are saved and referenced shown in gallery, camera_input to take character picture, on_change = function callbacks with continual running plots that change when you change data or click a button, randomness and word and letter rolls using emojis and st.markdown, st.expander for groupings and clusters of things, st.columns and other UI controls in streamlit as a game. Create inline data tables and list dictionaries for entities implemented as variables for the word game rule entities and stats. Design it as a fun data driven game app and show full python code listing for this ruleset and thematic story plot line: '
+ PromptPrefix3 = 'Create a HTML5 aframe and javascript app using appropriate libraries to create a word game simulation with advanced libraries like aframe to render 3d scenes creating moving entities that stay within a bounding box but show text and animation in 3d for inventory, components and story entities. Show full code listing. Add a list of new random entities say 3 of a few different types to any list appropriately and use emojis to make things easier and fun to read. Use appropriate emojis in labels. Create the UI to implement storytelling in the style of a dungeon master, with features using three emoji appropriate text plot twists and recurring interesting funny fascinating and complex almost poetic named characters with genius traits and file IO, randomness, ten point choice lists, math distribution tradeoffs, witty humorous dilemmas with emoji , rewards, variables, reusable functions with parameters, and data driven app with python libraries and streamlit components for Javascript and HTML5. Use appropriate emojis for labels to summarize and list parts, function, conditions for topic:'
+
+
+ # Function to display the entire glossary in a grid format with links
+ def display_glossary_grid(roleplaying_glossary):
+     search_urls = {
+         "📖": lambda k: f"https://en.wikipedia.org/wiki/{quote(k)}",
+         "🔍": lambda k: f"https://www.google.com/search?q={quote(k)}",
+         "▶️": lambda k: f"https://www.youtube.com/results?search_query={quote(k)}",
+         "🔎": lambda k: f"https://www.bing.com/search?q={quote(k)}",
+         "🐦": lambda k: f"https://twitter.com/search?q={quote(k)}",
+         "🎲": lambda k: f"https://huggingface.co/spaces/awacke1/WordGameAI?q={quote(k)}",  # this url plus query!
+         "🃏": lambda k: f"https://huggingface.co/spaces/awacke1/WordGameAI?q=For {quote(k)} {quote(PromptPrefix)}",  # this url plus query!
+         "📚": lambda k: f"https://huggingface.co/spaces/awacke1/WordGameAI?q=For {quote(k)} {quote(PromptPrefix2)}",  # this url plus query!
+         "🔬": lambda k: f"https://huggingface.co/spaces/awacke1/WordGameAI?q=For {quote(k)} {quote(PromptPrefix3)}",  # this url plus query!
+     }
+
+     for category, details in roleplaying_glossary.items():
+         st.write(f"### {category}")
+         cols = st.columns(len(details))  # Create dynamic columns based on the number of games
+         for idx, (game, terms) in enumerate(details.items()):
+             with cols[idx]:
+                 st.markdown(f"#### {game}")
+                 for term in terms:
+                     links_md = ' '.join([f"[{emoji}]({url(term)})" for emoji, url in search_urls.items()])
+                     st.markdown(f"{term} {links_md}", unsafe_allow_html=True)
+ def display_glossary_entity(k):
+     search_urls = {
+         "📖": lambda k: f"https://en.wikipedia.org/wiki/{quote(k)}",
+         "🔍": lambda k: f"https://www.google.com/search?q={quote(k)}",
+         "▶️": lambda k: f"https://www.youtube.com/results?search_query={quote(k)}",
+         "🔎": lambda k: f"https://www.bing.com/search?q={quote(k)}",
+         "🐦": lambda k: f"https://twitter.com/search?q={quote(k)}",
+         "🎲": lambda k: f"https://huggingface.co/spaces/awacke1/WordGameAI?q={quote(k)}",  # this url plus query!
+         "🃏": lambda k: f"https://huggingface.co/spaces/awacke1/WordGameAI?q=For {quote(k)} {quote(PromptPrefix)}",  # this url plus query!
+         "📚": lambda k: f"https://huggingface.co/spaces/awacke1/WordGameAI?q=For {quote(k)} {quote(PromptPrefix2)}",  # this url plus query!
+         "🔬": lambda k: f"https://huggingface.co/spaces/awacke1/WordGameAI?q=For {quote(k)} {quote(PromptPrefix3)}",  # this url plus query!
+     }
+     links_md = ' '.join([f"[{emoji}]({url(k)})" for emoji, url in search_urls.items()])
+     st.markdown(f"{k} {links_md}", unsafe_allow_html=True)
+
+
+
+ st.markdown('''### 📖✨🔍 WordGameAI ''')
+ with st.expander("Help / About 📚", expanded=False):
+     st.markdown('''
+     - 🚀 **Unlock Words:** Elevate your vocabulary with AI. Turns words into thrilling experiences.
+     - 📚 **Features:** Creates extensive glossaries & exciting challenges.
+     - 🧙‍♂️ **Experience:** Become a word wizard, boost your language skills.
+     - 🔎 **Query Use:** Input `?q=Palindrome` or `?query=Anagram` in URL for new challenges.
+     ''')
+
+
+ roleplaying_glossary = {
+     "👨‍👩‍👧‍👦 Top Family Games": {
+         "Big Easy Busket": ["New Orleans culture", "Band formation", "Song performance", "Location strategy", "Diversity celebration", "3-day gameplay"],
+         "Bonanza": [
+             "Bean planting and harvesting",
+             "Bid and trade interaction",
+             "Quirky card artwork",
+             "Hand management",
+             "Negotiation skills",
+             "Set collecting",
+             "Fun with large groups",
+             "Laughter and enjoyment"
+         ],
+         "Love Letter": [
+             "Valentine's Day theme",
+             "Simple gameplay mechanics",
+             "Card effects and strategy",
+             "Deduction to find love letter's sender",
+             "Take that elements",
+             "Fun for celebrating love",
+             "Engagement and elimination",
+             "Quick and engaging play"
+         ],
+         "The Novel Shogun": [
+             "Japanese History 1600s",
+             "Peregrine Falcon",
+             "Yellow Nape Amazon Parrot",
+             "Bill Ackman on Investing",
+             "Portugal History 1600s",
+             "England History 1600s",
+             "Building a Board with Different Points of View",
+             "Canadian Pacific Railway",
+             "Merchant Ships and Pilots"
+         ],
+         "Votes for Women": [
+             "World Social Justice Day theme",
+             "Card-driven game exploring American women's suffrage movement",
+             "1 to 4 player game",
+             "Released in 2022 by Fort Circle Games",
+             "Covers 1848 to 1920 suffrage movement",
+             "Includes competitive, cooperative, and solitary play modes",
+             "Engages players in the ratification or rejection of the 19th Amendment",
+             "Educational content on women's rights history",
+             "Mechanics include area majority, dice rolling, cooperative play, and campaign-driven gameplay"
+         ],
+     },
+     "📚 Traditional Word Games": {
+         "Scrabble": ["Tile placement", "Word formation", "Point scoring"],
+         "Boggle": ["Letter grid", "Timed word search", "Word length points"],
+         "Crossword Puzzles": ["Clue solving", "Word filling", "Thematic puzzles"],
+         "Bananagrams": ["Tile shuffling", "Personal anagram puzzles", "Speed challenge"],
+         "Hangman": ["Word guessing", "Letter guessing", "Limited attempts"],
+     },
+     "💡 Digital Word Games": {
+         "Words With Friends": ["Digital Scrabble-like", "Online multiplayer", "Social interaction"],
+         "Wordle": ["Daily word guessing", "Limited tries", "Shareable results"],
+         "Letterpress": ["Competitive word search", "Territory control", "Strategic letter usage"],
+         "Alphabear": ["Word formation", "Cute characters", "Puzzle strategy"],
+     },
+     "🎮 Game Design and Mechanics": {
+         "Gameplay Dynamics": ["Word discovery", "Strategic placement", "Time pressure"],
+         "Player Engagement": ["Daily challenges", "Leaderboards", "Community puzzles"],
+         "Learning and Development": ["Vocabulary building", "Spelling practice", "Cognitive skills"],
+     },
+     "🌐 Online Platforms & Tools": {
+         "Multiplayer Platforms": ["Real-time competition", "Asynchronous play", "Global matchmaking"],
+         "Educational Tools": ["Learning modes", "Progress tracking", "Skill levels"],
+         "Community Features": ["Forums", "Tips and tricks sharing", "Tournament organization"],
+     },
+     "🎖️ Competitive Scene": {
+         "Scrabble Tournaments": ["Official rules", "National and international", "Professional rankings"],
+         "Crossword Competitions": ["Speed solving", "Puzzle variety", "Prizes and recognition"],
+         "Wordle Challenges": ["Streaks", "Perfect scores", "Community leaderboards"],
+     },
+     "📚 Lore & Background": {
+         "History of Word Games": ["Evolution over time", "Cultural significance", "Famous games"],
+         "Iconic Word Game Creators": ["Creators and designers", "Inspirational stories", "Game development"],
+         "Word Games in Literature": ["Literary puzzles", "Wordplay in writing", "Famous examples"],
+     },
+     "🛠️ Resources & Development": {
+         "Game Creation Tools": ["Word game generators", "Puzzle design software", "Community mods"],
+         "Educational Resources": ["Vocabulary lists", "Word game strategies", "Learning methodologies"],
+         "Digital Platforms": ["App development", "Online game hosting", "Social media integration"],
+     },
+
+ }
+
+
+
+ # HTML5 based Speech Synthesis (Text to Speech in Browser)
+ @st.cache_resource
+ def SpeechSynthesis(result):
+     documentHTML5 = '''
+     <!DOCTYPE html>
+     <html>
+     <head>
+         <title>Read It Aloud</title>
+         <script type="text/javascript">
+             function readAloud() {
+                 const text = document.getElementById("textArea").value;
+                 const speech = new SpeechSynthesisUtterance(text);
+                 window.speechSynthesis.speak(speech);
+             }
+         </script>
+     </head>
+     <body>
+         <h1>🔊 Read It Aloud</h1>
+         <textarea id="textArea" rows="10" cols="80">
+     '''
+     documentHTML5 = documentHTML5 + result
+     documentHTML5 = documentHTML5 + '''
+     </textarea>
+     <br>
+     <button onclick="readAloud()">🔊 Read Aloud</button>
+     </body>
+     </html>
+     '''
+     components.html(documentHTML5, width=1280, height=300)
+
+
+ @st.cache_resource
+ def get_table_download_link(file_path):
+     with open(file_path, 'rb') as file:  # read as bytes so binary formats (.wav, .xlsx) do not fail
+         data = file.read()
+     b64 = base64.b64encode(data).decode()
+     file_name = os.path.basename(file_path)
+     ext = os.path.splitext(file_name)[1]  # get the file extension
+     if ext == '.txt':
+         mime_type = 'text/plain'
+     elif ext == '.py':
+         mime_type = 'text/plain'
+     elif ext == '.xlsx':
+         mime_type = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
+     elif ext == '.csv':
+         mime_type = 'text/csv'
+     elif ext == '.htm':
+         mime_type = 'text/html'
+     elif ext == '.md':
+         mime_type = 'text/markdown'
+     elif ext == '.wav':
+         mime_type = 'audio/wav'
+     else:
+         mime_type = 'application/octet-stream'  # general binary data type
+     href = f'<a href="data:{mime_type};base64,{b64}" target="_blank" download="{file_name}">{file_name}</a>'
+     return href
+
+
+ @st.cache_resource
+ def create_zip_of_files(files):  # ----------------------------------
+     zip_name = "WordGameAI.zip"
+     with zipfile.ZipFile(zip_name, 'w') as zipf:
+         for file in files:
+             zipf.write(file)
+     return zip_name
+ @st.cache_resource
+ def get_zip_download_link(zip_file):
+     with open(zip_file, 'rb') as f:
+         data = f.read()
+     b64 = base64.b64encode(data).decode()
+     href = f'<a href="data:application/zip;base64,{b64}" download="{zip_file}">Download All</a>'
+     return href  # ----------------------------------
+
+
+ def FileSidebar():
+     # ----------------------------------------------------- File Sidebar for Jump Gates ------------------------------------------
+     # Compose a file sidebar of markdown md files:
+     all_files = glob.glob("*.md")
+     all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 10]  # exclude files with short names
+     all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True)  # sort by file type and file name in descending order
+     if st.sidebar.button("🗑 Delete All Text"):
+         for file in all_files:
+             os.remove(file)
+         st.experimental_rerun()
+     if st.sidebar.button("⬇️ Download All"):
+         zip_file = create_zip_of_files(all_files)
+         st.sidebar.markdown(get_zip_download_link(zip_file), unsafe_allow_html=True)
+     file_contents = ''
+     next_action = ''
+     for file in all_files:
+         col1, col2, col3, col4, col5 = st.sidebar.columns([1, 6, 1, 1, 1])  # adjust the ratio as needed
+         with col1:
+             if st.button("🌐", key="md_" + file):  # md emoji button
+                 with open(file, 'r') as f:
+                     file_contents = f.read()
+                 next_action = 'md'
+         with col2:
+             st.markdown(get_table_download_link(file), unsafe_allow_html=True)
+         with col3:
+             if st.button("📂", key="open_" + file):  # open emoji button
+                 with open(file, 'r') as f:
+                     file_contents = f.read()
+                 next_action = 'open'
+         with col4:
+             if st.button("🔍", key="read_" + file):  # search emoji button
+                 with open(file, 'r') as f:
+                     file_contents = f.read()
+                 next_action = 'search'
+         with col5:
+             if st.button("🗑", key="delete_" + file):
+                 os.remove(file)
+                 st.experimental_rerun()
+
+
+     if len(file_contents) > 0:
+         if next_action == 'open':
+             file_content_area = st.text_area("File Contents:", file_contents, height=500)
+             try:
+                 if st.button("🔍", key="filecontentssearch"):
+                     search_glossary(file_content_area)
+             except:
+                 st.markdown('GPT is sleeping. Restart ETA 30 seconds.')
+
+         if next_action == 'md':
+             st.markdown(file_contents)
+             buttonlabel = '🔍Run'
+             if st.button(key='Runmd', label=buttonlabel):
+                 user_prompt = file_contents
+                 try:
+                     search_glossary(file_contents)
+                 except:
+                     st.markdown('GPT is sleeping. Restart ETA 30 seconds.')
+
+         if next_action == 'search':
+             file_content_area = st.text_area("File Contents:", file_contents, height=500)
+             user_prompt = file_contents
+             try:
+                 search_glossary(file_contents)
+             except:
+                 st.markdown('GPT is sleeping. Restart ETA 30 seconds.')
+     # ----------------------------------------------------- File Sidebar for Jump Gates ------------------------------------------
+
+
+ FileSidebar()
+
+
+
+ # ---- Art Card Sidebar with Random Selection of image:
+ def get_image_as_base64(url):
+     response = requests.get(url)
+     if response.status_code == 200:
+         # Convert the image to base64
+         return base64.b64encode(response.content).decode("utf-8")
+     else:
+         return None
+ def create_download_link(filename, base64_str):
+     href = f'<a href="data:file/png;base64,{base64_str}" download="{filename}">Download Image</a>'
+     return href
+ image_urls = [
+     "https://cdn-uploads.huggingface.co/production/uploads/620630b603825909dcbeba35/gv1xmIiXh1NGTeeV-cYF2.png",
+     "https://cdn-uploads.huggingface.co/production/uploads/620630b603825909dcbeba35/2YsnDyc_nDNW71PPKozdN.png",
+     "https://cdn-uploads.huggingface.co/production/uploads/620630b603825909dcbeba35/G_GkRD_IT3f14K7gWlbwi.png",
+ ]
+ selected_image_url = random.choice(image_urls)
+ selected_image_base64 = get_image_as_base64(selected_image_url)
+ if selected_image_base64 is not None:
+     with st.sidebar:
+         st.markdown("""### Word Game AI""")
+         st.markdown(f"![image](data:image/png;base64,{selected_image_base64})")
+ else:
+     st.sidebar.write("Failed to load the image.")
+ # ---- Art Card Sidebar with random selection of image.
+
+
+
+
+
+ # Ensure the directory for storing scores exists
+ score_dir = "scores"
+ os.makedirs(score_dir, exist_ok=True)
+
+ # Function to generate a unique key for each button, including an emoji
+ def generate_key(label, header, idx):
+     return f"{header}_{label}_{idx}_key"
+
+ # Function to increment and save score
+ def update_score(key, increment=1):
+     score_file = os.path.join(score_dir, f"{key}.json")
+     if os.path.exists(score_file):
+         with open(score_file, "r") as file:
+             score_data = json.load(file)
+     else:
+         score_data = {"clicks": 0, "score": 0}
+
+     score_data["clicks"] += 1
+     score_data["score"] += increment
+
+     with open(score_file, "w") as file:
+         json.dump(score_data, file)
+
+     return score_data["score"]
+
+ # Function to load score
+ def load_score(key):
+     score_file = os.path.join(score_dir, f"{key}.json")
+     if os.path.exists(score_file):
+         with open(score_file, "r") as file:
+             score_data = json.load(file)
+         return score_data["score"]
+     return 0
+
+ @st.cache_resource
+ def search_glossary(query):  # 🔍Run--------------------------------------------------------
+     for category, terms in roleplaying_glossary.items():
+         if query.lower() in (term.lower() for term in terms):
+             st.markdown(f"#### {category}")
+             st.write(f"- {query}")
+
+     all = ""
+
+     #query2 = PromptPrefix + query
+     query2 = query
+     response = chat_with_model(query2)
+     all = query + ' ' + response
+     filename = generate_filename(response, "md")
+     create_file(filename, query, response, should_save)
+
+
+
+     #query3 = PromptPrefix2 + query + ' for story outline of method steps: ' + response  # Add prompt preface for coding task behavior
+     #response2 = chat_with_model(query3)
+
+     #query4 = PromptPrefix3 + query + ' using this streamlit python program specification to define features. Create entities for each variable and generate UI with HTML5 and JS that matches the streamlit program: ' + response2  # Add prompt preface for coding task behavior
+     #response3 = chat_with_model(query4)
+
+     #all = query + ' ' + response + ' ' + response2 + ' ' + response3
+
+     #filename = generate_filename(all, "md")
+     #create_file(filename, query, all, should_save)
+
+     SpeechSynthesis(all)
+     return all  # 🔍Run--------------------------------------------------------
+
+
+ # Function to display the glossary in a structured format
+ def display_glossary(glossary, area):
+     if area in glossary:
+         st.subheader(f"📘 Glossary for {area}")
+         for game, terms in glossary[area].items():
+             st.markdown(f"### {game}")
+             for idx, term in enumerate(terms, start=1):
+                 st.write(f"{idx}. {term}")
+
+
+ # Function to display the entire glossary in a grid format with links
+ def display_glossary_grid(roleplaying_glossary):
+     search_urls = {
+         "📖": lambda k: f"https://en.wikipedia.org/wiki/{quote(k)}",
+         "🔍": lambda k: f"https://www.google.com/search?q={quote(k)}",
+         "▶️": lambda k: f"https://www.youtube.com/results?search_query={quote(k)}",
+         "🔎": lambda k: f"https://www.bing.com/search?q={quote(k)}",
+         "🎲": lambda k: f"https://huggingface.co/spaces/awacke1/MixableWordGameAI?q={quote(k)}",  # this url plus query!
+
+     }
+
+     for category, details in roleplaying_glossary.items():
+         st.write(f"### {category}")
+         cols = st.columns(len(details))  # Create dynamic columns based on the number of games
+         for idx, (game, terms) in enumerate(details.items()):
+             with cols[idx]:
+                 st.markdown(f"#### {game}")
+                 for term in terms:
+                     links_md = ' '.join([f"[{emoji}]({url(term)})" for emoji, url in search_urls.items()])
+                     st.markdown(f"{term} {links_md}", unsafe_allow_html=True)
+
469
+ @st.cache_resource
470
+ def display_videos_and_links():
471
+ video_files = [f for f in os.listdir('.') if f.endswith('.mp4')]
472
+ if not video_files:
473
+ st.write("No MP4 videos found in the current directory.")
474
+ return
475
+
476
+ video_files_sorted = sorted(video_files, key=lambda x: len(x.split('.')[0]))
477
+
478
+ cols = st.columns(2) # Define 2 columns outside the loop
479
+ col_index = 0 # Initialize column index
480
+
481
+ for video_file in video_files_sorted:
482
+ with cols[col_index % 2]: # Use modulo 2 to alternate between the first and second column
483
+ # Embedding video with autoplay and loop using HTML
484
+ #video_html = ("""<video width="100%" loop autoplay> <source src="{video_file}" type="video/mp4">Your browser does not support the video tag.</video>""")
485
+ #st.markdown(video_html, unsafe_allow_html=True)
486
+ k = video_file.split('.')[0] # Assumes keyword is the file name without extension
487
+ st.video(video_file, format='video/mp4', start_time=0)
488
+ display_glossary_entity(k)
489
+ col_index += 1 # Increment column index to place the next video in the next column
490
+
+ @st.cache_resource
+ def display_images_and_wikipedia_summaries():
+     image_files = [f for f in os.listdir('.') if f.endswith('.png')]
+     if not image_files:
+         st.write("No PNG images found in the current directory.")
+         return
+     image_files_sorted = sorted(image_files, key=lambda x: len(x.split('.')[0]))
+     grid_sizes = [len(f.split('.')[0]) for f in image_files_sorted]
+     col_sizes = ['small' if size <= 4 else 'medium' if size <= 8 else 'large' for size in grid_sizes]
+     num_columns_map = {"small": 4, "medium": 3, "large": 2}
+     current_grid_size = 0
+     for image_file, col_size in zip(image_files_sorted, col_sizes):
+         if current_grid_size != num_columns_map[col_size]:
+             cols = st.columns(num_columns_map[col_size])
+             current_grid_size = num_columns_map[col_size]
+             col_index = 0
+         with cols[col_index % current_grid_size]:
+             image = Image.open(image_file)
+             st.image(image, caption=image_file, use_column_width=True)
+             k = image_file.split('.')[0]  # Assumes keyword is the file name without extension
+             display_glossary_entity(k)
+
+ def get_all_query_params(key):
+     return st.query_params.get_all(key)  # st.query_params is a dict-like object, not a callable
+
+ def clear_query_params():
+     st.query_params.clear()
+
+ # Function to display content or image based on a query
+ @st.cache_resource
+ def display_content_or_image(query):
+     for category, terms in roleplaying_glossary.items():
+         for term in terms:
+             if query.lower() in term.lower():
+                 st.subheader(f"Found in {category}:")
+                 st.write(term)
+                 return True  # Return after finding and displaying the first match
+     image_dir = "images"  # Example directory where images are stored
+     image_path = f"{image_dir}/{query}.png"  # Construct image path with query
+     if os.path.exists(image_path):
+         st.image(image_path, caption=f"Image for {query}")
+         return True
+     st.warning("No matching content or image found.")
+     return False
+
+
538
+ "Dungeons and Dragons": "🐉",
539
+ "Call of Cthulhu": "🐙",
540
+ "GURPS": "🎲",
541
+ "Pathfinder": "🗺️",
542
+ "Kindred of the East": "🌅",
543
+ "Changeling": "🍃",
544
+ }
545
+
546
+ topic_emojis = {
547
+ "Core Rulebooks": "📚",
548
+ "Maps & Settings": "🗺️",
549
+ "Game Mechanics & Tools": "⚙️",
550
+ "Monsters & Adversaries": "👹",
551
+ "Campaigns & Adventures": "📜",
552
+ "Creatives & Assets": "🎨",
553
+ "Game Master Resources": "🛠️",
554
+ "Lore & Background": "📖",
555
+ "Character Development": "🧍",
556
+ "Homebrew Content": "🔧",
557
+ "General Topics": "🌍",
558
+ }
559
+
+ # Adjusted display_buttons_with_scores function
+ def display_buttons_with_scores():
+     for category, games in roleplaying_glossary.items():
+         category_emoji = topic_emojis.get(category, "🔍")  # Default to search icon if no match
+         st.markdown(f"## {category_emoji} {category}")
+         for game, terms in games.items():
+             game_emoji = game_emojis.get(game, "🎮")  # Default to generic game controller if no match
+             for term in terms:
+                 key = f"{category}_{game}_{term}".replace(' ', '_').lower()
+                 score = load_score(key)
+                 if st.button(f"{game_emoji} {term} {score}", key=key):
+                     update_score(key)
+                     # Create a dynamic query incorporating emojis and formatting for clarity
+                     query_prefix = f"{category_emoji} {game_emoji} **{game} - {category}:**"
+                     # ----------------------------------------------------------------------------------------------
+                     #query_body = f"Create a detailed outline for **{term}** with subpoints highlighting key aspects, using emojis for visual engagement. Include step-by-step rules and boldface important entities and ruleset elements."
+                     query_body = f"Create a streamlit python app.py that produces a detailed markdown outline and emoji laden user interface with labels with the entity name and emojis in all labels with a set of streamlit UI components with drop down lists and dataframes and buttons with expander and sidebar for the app to run the data as default values mostly in text boxes. Feature a 3 point outline with 3 subpoints each where each line has about six words describing this and also contain appropriate emoji for creating a summary of all aspects of this topic. an outline for **{term}** with subpoints highlighting key aspects, using emojis for visual engagement. Include step-by-step rules and boldface important entities and ruleset elements."
+                     response = search_glossary(query_prefix + query_body)
+
+
+ def fetch_wikipedia_summary(keyword):
+     # Placeholder function for fetching Wikipedia summaries
+     # In a real app, you might use requests to fetch from the Wikipedia API
+     return f"Summary for {keyword}. For more information, visit Wikipedia."
+
+ def create_search_url_youtube(keyword):
+     base_url = "https://www.youtube.com/results?search_query="
+     return base_url + keyword.replace(' ', '+')
+
+ def create_search_url_bing(keyword):
+     base_url = "https://www.bing.com/search?q="
+     return base_url + keyword.replace(' ', '+')
+
+ def create_search_url_wikipedia(keyword):
+     base_url = "https://www.wikipedia.org/search-redirect.php?family=wikipedia&language=en&search="
+     return base_url + keyword.replace(' ', '+')
+
+ def create_search_url_google(keyword):
+     base_url = "https://www.google.com/search?q="
+     return base_url + keyword.replace(' ', '+')
+
+ def create_search_url_ai(keyword):
+     base_url = "https://huggingface.co/spaces/awacke1/MixableWordGameAI?q="
+     return base_url + keyword.replace(' ', '+')
+
+ def display_images_and_wikipedia_summaries():
+     image_files = [f for f in os.listdir('.') if f.endswith('.png')]
+     if not image_files:
+         st.write("No PNG images found in the current directory.")
+         return
+
+     for image_file in image_files:
+         image = Image.open(image_file)
+         st.image(image, caption=image_file, use_column_width=True)
+
+         keyword = image_file.split('.')[0]  # Assumes keyword is the file name without extension
+
+         # Display Wikipedia and Google search links
+         wikipedia_url = create_search_url_wikipedia(keyword)
+         google_url = create_search_url_google(keyword)
+         youtube_url = create_search_url_youtube(keyword)
+         bing_url = create_search_url_bing(keyword)
+         ai_url = create_search_url_ai(keyword)
+
+
+         links_md = f"""
+         [Wikipedia]({wikipedia_url}) |
+         [Google]({google_url}) |
+         [YouTube]({youtube_url}) |
+         [Bing]({bing_url}) |
+         [AI]({ai_url})
+         """
+         st.markdown(links_md)
+
+
635
+ def get_all_query_params(key):
636
+ return st.query_params().get(key, [])
637
+
638
+ def clear_query_params():
639
+ st.query_params()
640
+
641
+
642
+
643
# My Inference API Copy
API_URL = 'https://qe55p8afio98s0u3.us-east-1.aws.endpoints.huggingface.cloud'  # Dr Llama
# Meta's Original - Chat HF Free Version:
#API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-chat-hf"
API_KEY = os.getenv('API_KEY')
MODEL1 = "meta-llama/Llama-2-7b-chat-hf"
MODEL1URL = "https://huggingface.co/meta-llama/Llama-2-7b-chat-hf"
HF_KEY = os.getenv('HF_KEY')
headers = {
    "Authorization": f"Bearer {HF_KEY}",
    "Content-Type": "application/json"
}
key = os.getenv('OPENAI_API_KEY')
prompt = "...."
should_save = st.sidebar.checkbox("💾 Save", value=True, help="Save your session data.")

# 3. Stream Llama Response
# @st.cache_resource
def StreamLLMChatResponse(prompt):
    try:
        endpoint_url = API_URL
        hf_token = API_KEY
        st.write('Running client ' + endpoint_url)
        client = InferenceClient(endpoint_url, token=hf_token)
        gen_kwargs = dict(
            max_new_tokens=512,
            top_k=30,
            top_p=0.9,
            temperature=0.2,
            repetition_penalty=1.02,
            stop_sequences=["\nUser:", "<|endoftext|>", "</s>"],
        )
        stream = client.text_generation(prompt, stream=True, details=True, **gen_kwargs)
        report = []
        res_box = st.empty()
        collected_chunks = []
        collected_messages = []
        result = ''  # initialized so the post-loop calls are safe even if the stream yields nothing
        for r in stream:
            if r.token.special:
                continue
            if r.token.text in gen_kwargs["stop_sequences"]:
                break
            collected_chunks.append(r.token.text)
            chunk_message = r.token.text
            collected_messages.append(chunk_message)
            try:
                report.append(r.token.text)
                if len(r.token.text) > 0:
                    result = "".join(report).strip()
                    res_box.markdown(f'*{result}*')
            except Exception:
                st.write('Stream llm issue')
        SpeechSynthesis(result)
        return result
    except Exception:
        st.write('Llama model is asleep. Starting up now on A10 - please give 5 minutes then retry as KEDA scales up from zero to activate running container(s).')

# 4. Run query with payload
def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    st.markdown(response.json())
    return response.json()


def get_output(prompt):
    return query({"inputs": prompt})


# 5. Auto-name generated output files from time and content
def generate_filename(prompt, file_type):
    central = pytz.timezone('US/Central')
    safe_date_time = datetime.now(central).strftime("%m%d_%H%M")
    replaced_prompt = prompt.replace(" ", "_").replace("\n", "_")
    safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:255]  # 255 is the Linux max; 260 is the Windows max
    return f"{safe_date_time}_{safe_prompt}.{file_type}"

# 6. Speech transcription via OpenAI service
def transcribe_audio(openai_key, file_path, model):
    openai.api_key = openai_key
    OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions"
    headers = {
        "Authorization": f"Bearer {openai_key}",
    }
    with open(file_path, 'rb') as f:
        data = {'file': f}
        st.write('STT transcript ' + OPENAI_API_URL)
        response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model})
    if response.status_code == 200:
        st.write(response.json())
        transcript = response.json().get('text')
        chatResponse = chat_with_model(transcript, '')
        filename = generate_filename(transcript, 'txt')
        create_file(filename, transcript, chatResponse, should_save)
        return transcript
    else:
        st.write(response.json())
        st.error("Error in API call.")
        return None

# 7. Auto-stop-on-silence audio control for recording WAV files
def save_and_play_audio(audio_recorder):
    audio_bytes = audio_recorder(key='audio_recorder')
    if audio_bytes:
        filename = generate_filename("Recording", "wav")
        with open(filename, 'wb') as f:
            f.write(audio_bytes)
        st.audio(audio_bytes, format="audio/wav")
        return filename
    return None

# 8. File creator that interprets type and creates an output file for text, markdown and code
def create_file(filename, prompt, response, should_save=True):
    if not should_save:
        return
    base_filename, ext = os.path.splitext(filename)
    if ext in ['.txt', '.htm', '.md']:
        with open(f"{base_filename}.md", 'w') as file:
            try:
                content = prompt.strip() + '\r\n' + response
                file.write(content)
            except Exception:
                st.write('.')

    #has_python_code = bool(re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response))
    #if has_python_code:
    #    python_code = re.findall(r"```python([\s\S]*?)```", response)[0].strip()
    #    with open(f"{base_filename}-Code.py", 'w') as file:
    #        file.write(python_code)
    #    with open(f"{base_filename}.md", 'w') as file:
    #        content = prompt.strip() + '\r\n' + response
    #        file.write(content)

def truncate_document(document, length):
    return document[:length]


def divide_document(document, max_length):
    return [document[i:i+max_length] for i in range(0, len(document), max_length)]


def CompressXML(xml_text):
    root = ET.fromstring(xml_text)
    # ElementTree elements carry no parent pointer, so build a child -> parent map first
    parent_map = {child: parent for parent in root.iter() for child in parent}
    for elem in list(root.iter()):
        if isinstance(elem.tag, str) and 'Comment' in elem.tag:
            parent_map[elem].remove(elem)
    return ET.tostring(root, encoding='unicode', method="xml")

# 10. Read in and provide UI for past files
@st.cache_resource
def read_file_content(file, max_length):
    if file.type == "application/json":
        content = json.load(file)
        return str(content)
    elif file.type == "text/html" or file.type == "text/htm":
        content = BeautifulSoup(file, "html.parser")
        return content.text
    elif file.type == "application/xml" or file.type == "text/xml":
        tree = ET.parse(file)
        root = tree.getroot()
        xml = CompressXML(ET.tostring(root, encoding='unicode'))
        return xml
    elif file.type == "text/markdown" or file.type == "text/md":
        md = mistune.create_markdown()
        content = md(file.read().decode())
        return content
    elif file.type == "text/plain":
        return file.getvalue().decode()
    else:
        return ""


# 11. Chat with GPT - caution on quota - now favoring the fastest AI pipeline: STT Whisper -> LLM Llama -> TTS
@st.cache_resource
def chat_with_model(prompt, document_section='', model_choice='gpt-3.5-turbo'):  # gpt-4-0125-preview gpt-3.5-turbo
    model = model_choice
    conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
    conversation.append({'role': 'user', 'content': prompt})
    if len(document_section) > 0:
        conversation.append({'role': 'assistant', 'content': document_section})
    start_time = time.time()
    report = []
    res_box = st.empty()
    collected_chunks = []
    collected_messages = []

    for chunk in openai.ChatCompletion.create(model=model_choice, messages=conversation, temperature=0.5, stream=True):
        collected_chunks.append(chunk)
        chunk_message = chunk['choices'][0]['delta']
        collected_messages.append(chunk_message)
        content = chunk["choices"][0].get("delta", {}).get("content")
        if content:  # the final streamed delta carries no content key
            report.append(content)
            result = "".join(report).strip()
            res_box.markdown(f'*{result}*')
    full_reply_content = ''.join([m.get('content', '') for m in collected_messages])
    st.write("Elapsed time:")
    st.write(time.time() - start_time)
    return full_reply_content

@st.cache_resource
def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'):  # gpt-4-0125-preview gpt-3.5-turbo
    conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
    conversation.append({'role': 'user', 'content': prompt})
    if len(file_content) > 0:
        conversation.append({'role': 'assistant', 'content': file_content})
    response = openai.ChatCompletion.create(model=model_choice, messages=conversation)
    return response['choices'][0]['message']['content']


def extract_mime_type(file):
    if isinstance(file, str):
        pattern = r"type='(.*?)'"
        match = re.search(pattern, file)
        if match:
            return match.group(1)
        else:
            raise ValueError(f"Unable to extract MIME type from {file}")
    elif hasattr(file, 'type'):  # an st.file_uploader UploadedFile exposes .type
        return file.type
    else:
        raise TypeError("Input should be a string or a Streamlit UploadedFile object")


def extract_file_extension(file):
    # Get the file name directly from the UploadedFile object
    file_name = file.name
    pattern = r".*?\.(.*?)$"
    match = re.search(pattern, file_name)
    if match:
        return match.group(1)
    else:
        raise ValueError(f"Unable to extract file extension from {file_name}")

# Normalize input as text from PDF and other formats
@st.cache_resource
def pdf2txt(docs):
    text = ""
    for file in docs:
        file_extension = extract_file_extension(file)
        st.write(f"File type extension: {file_extension}")
        if file_extension.lower() in ['py', 'txt', 'html', 'htm', 'xml', 'json']:
            text += file.getvalue().decode('utf-8')
        elif file_extension.lower() == 'pdf':
            from PyPDF2 import PdfReader
            pdf = PdfReader(BytesIO(file.getvalue()))
            for page in range(len(pdf.pages)):
                text += pdf.pages[page].extract_text()  # new PyPDF2 syntax
    return text

def txt2chunks(text):
    text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len)
    return text_splitter.split_text(text)


# Vector store using FAISS
@st.cache_resource
def vector_store(text_chunks):
    embeddings = OpenAIEmbeddings(openai_api_key=key)
    return FAISS.from_texts(texts=text_chunks, embedding=embeddings)


# Memory and retrieval chains
@st.cache_resource
def get_chain(vectorstore):
    llm = ChatOpenAI()
    memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)
    return ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory)

def process_user_input(user_question):
    response = st.session_state.conversation({'question': user_question})
    st.session_state.chat_history = response['chat_history']
    for i, message in enumerate(st.session_state.chat_history):
        template = user_template if i % 2 == 0 else bot_template
        st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True)
    # Save the question alongside the last message in the history
    filename = generate_filename(user_question, 'txt')
    create_file(filename, user_question, message.content, should_save)

def divide_prompt(prompt, max_length):
    words = prompt.split()
    chunks = []
    current_chunk = []
    current_length = 0
    for word in words:
        if len(word) + current_length <= max_length:
            current_length += len(word) + 1
            current_chunk.append(word)
        else:
            chunks.append(' '.join(current_chunk))
            current_chunk = [word]
            current_length = len(word)
    chunks.append(' '.join(current_chunk))
    return chunks
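As a standalone sanity check of the word-aware chunker above (the function is re-declared here so the snippet runs on its own), chunks never exceed the character budget and splits fall only on word boundaries:

```python
def divide_prompt(prompt, max_length):
    # Split on words so no chunk exceeds max_length characters
    words = prompt.split()
    chunks, current_chunk, current_length = [], [], 0
    for word in words:
        if len(word) + current_length <= max_length:
            current_length += len(word) + 1
            current_chunk.append(word)
        else:
            chunks.append(' '.join(current_chunk))
            current_chunk = [word]
            current_length = len(word)
    chunks.append(' '.join(current_chunk))
    return chunks

chunks = divide_prompt("the quick brown fox jumps over the lazy dog", 15)
# chunks == ['the quick brown', 'fox jumps over', 'the lazy dog']
```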


# Whisper inference endpoint; a dedicated endpoint is kept for reference
#API_URL_IE = 'https://tonpixzfvq3791u9.us-east-1.aws.endpoints.huggingface.cloud'
API_URL_IE = "https://api-inference.huggingface.co/models/openai/whisper-small.en"
MODEL2 = "openai/whisper-small.en"
MODEL2_URL = "https://huggingface.co/openai/whisper-small.en"
HF_KEY = st.secrets['HF_KEY']
# NOTE: this redefines the `headers` and `query` defined above; the Whisper versions take precedence
headers = {
    "Authorization": f"Bearer {HF_KEY}",
    "Content-Type": "audio/wav"
}

def query(filename):
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL_IE, headers=headers, data=data)
    return response.json()

def generate_filename(prompt, file_type):
    central = pytz.timezone('US/Central')
    safe_date_time = datetime.now(central).strftime("%m%d_%H%M")
    replaced_prompt = prompt.replace(" ", "_").replace("\n", "_")
    safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:90]
    return f"{safe_date_time}_{safe_prompt}.{file_type}"
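The sanitization step above can be checked in isolation. A stripped-down version of the name-cleaning logic (the timestamp is omitted so the snippet is deterministic; the helper name is an illustrative addition, but the substitution rules and 90-character cap mirror the function above):

```python
def safe_prompt_fragment(prompt, max_len=90):
    # Mirror the cleaning above: spaces/newlines become underscores, then keep
    # only alphanumerics and underscores, truncated to max_len characters
    replaced = prompt.replace(" ", "_").replace("\n", "_")
    return "".join(x for x in replaced if x.isalnum() or x == "_")[:max_len]

# safe_prompt_fragment("What is FAISS? A quick intro") == "What_is_FAISS_A_quick_intro"
```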

# 15. Audio recorder to WAV file
def save_and_play_audio(audio_recorder):
    audio_bytes = audio_recorder()
    if audio_bytes:
        filename = generate_filename("Recording", "wav")
        with open(filename, 'wb') as f:
            f.write(audio_bytes)
        st.audio(audio_bytes, format="audio/wav")
        return filename
    return None

# 16. Speech transcription to file output
def transcribe_audio(filename):
    output = query(filename)
    return output

def whisper_main():
    filename = save_and_play_audio(audio_recorder)
    if filename is not None:
        transcription = transcribe_audio(filename)
        try:
            transcript = transcription['text']
            st.write(transcript)
        except Exception:
            transcript = ''
            st.write(transcript)

        st.write('Reasoning with your inputs..')
        response = chat_with_model(transcript)
        st.write('Response:')
        st.write(response)
        filename_txt = generate_filename(response, "txt")
        create_file(filename_txt, transcript, response, should_save)

        # Whisper to Llama:
        response = StreamLLMChatResponse(transcript)
        filename_md = generate_filename(transcript, "md")
        create_file(filename_md, transcript, response, should_save)
        # Keep a copy of the recording under the markdown file's name
        filename_wav = filename_md.replace('.md', '.wav')
        import shutil
        try:
            if os.path.exists(filename):
                shutil.copyfile(filename, filename_wav)
        except Exception:
            st.write('.')
        if os.path.exists(filename):
            os.remove(filename)


# Sample function to demonstrate a response; replace with your own logic
def StreamMedChatResponse(topic):
    st.write(f"Showing resources or questions related to: {topic}")


# 17. Main
def main():
    prompt = PromptPrefix2
    with st.expander("Prompts 📚", expanded=False):
        example_input = st.text_input("Enter your prompt text:", value=prompt, help="Enter text to get a response.")
        if st.button("Run Prompt", help="Click to run."):
            try:
                response = StreamLLMChatResponse(example_input)
                filename = generate_filename(example_input, "md")
                create_file(filename, example_input, response, should_save)
            except Exception:
                st.write('Model is asleep. Starting now on an A10 GPU. Please wait one minute, then retry - KEDA triggered.')
    openai.api_key = os.getenv('OPENAI_API_KEY')
    if openai.api_key is None:
        openai.api_key = st.secrets['OPENAI_API_KEY']
    menu = ["txt", "htm", "xlsx", "csv", "md", "py"]
    choice = st.sidebar.selectbox("Output File Type:", menu)
    model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301'))
    user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100)
    collength, colupload = st.columns([2, 3])  # adjust the ratio as needed
    with collength:
        max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000)
    with colupload:
        uploaded_file = st.file_uploader("Add a file for context:", type=["pdf", "xml", "json", "xlsx", "csv", "html", "htm", "md", "txt"])
    document_sections = deque()
    document_responses = {}
    if uploaded_file is not None:
        file_content = read_file_content(uploaded_file, max_length)
        document_sections.extend(divide_document(file_content, max_length))
    if len(document_sections) > 0:
        if st.button("👁️ View Upload"):
            st.markdown("**Sections of the uploaded file:**")
            for i, section in enumerate(list(document_sections)):
                st.markdown(f"**Section {i+1}**\n{section}")
        st.markdown("**Chat with the model:**")
        for i, section in enumerate(list(document_sections)):
            if i in document_responses:
                st.markdown(f"**Section {i+1}**\n{document_responses[i]}")
            else:
                if st.button(f"Chat about Section {i+1}"):
                    st.write('Reasoning with your inputs...')
                    response = chat_with_model(user_prompt, section, model_choice)
                    st.write('Response:')
                    st.write(response)
                    document_responses[i] = response
                    filename = generate_filename(f"{user_prompt}_section_{i+1}", choice)
                    create_file(filename, user_prompt, response, should_save)
                    st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
    if st.button('💬 Chat'):
        st.write('Reasoning with your inputs...')
        user_prompt_sections = divide_prompt(user_prompt, max_length)
        full_response = ''
        for prompt_section in user_prompt_sections:
            response = chat_with_model(prompt_section, ''.join(list(document_sections)), model_choice)
            full_response += response + '\n'  # combine the responses
        response = full_response
        st.write('Response:')
        st.write(response)
        filename = generate_filename(user_prompt, choice)
        create_file(filename, user_prompt, response, should_save)

    # Function to encode a file to base64
    def get_base64_encoded_file(file_path):
        with open(file_path, "rb") as file:
            return base64.b64encode(file.read()).decode()

    # Function to create a download link
    def get_audio_download_link(file_path):
        base64_file = get_base64_encoded_file(file_path)
        return f'<a href="data:file/wav;base64,{base64_file}" download="{os.path.basename(file_path)}">⬇️ Download Audio</a>'

    # Compose a file sidebar of past encounters
    all_files = glob.glob("*.wav")
    all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 10]  # exclude files with short names
    all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True)  # sort by file type, then name, descending

    filekey = 'delall'
    if st.sidebar.button("🗑 Delete All Audio", key=filekey):
        for file in all_files:
            os.remove(file)
        st.rerun()

    for file in all_files:
        col1, col2 = st.sidebar.columns([6, 1])  # adjust the ratio as needed
        with col1:
            st.markdown(file)
            if st.button("🎵", key="play_" + file):  # play emoji button
                audio_file = open(file, 'rb')
                audio_bytes = audio_file.read()
                st.audio(audio_bytes, format='audio/wav')
                #st.markdown(get_audio_download_link(file), unsafe_allow_html=True)
        with col2:
            if st.button("🗑", key="delete_" + file):
                os.remove(file)
                st.rerun()

    GiveFeedback = False
    if GiveFeedback:
        with st.expander("Give your feedback 👍", expanded=False):
            feedback = st.radio("Step 8: Give your feedback", ("👍 Upvote", "👎 Downvote"))
            if feedback == "👍 Upvote":
                st.write("You upvoted 👍. Thank you for your feedback!")
            else:
                st.write("You downvoted 👎. Thank you for your feedback!")

    load_dotenv()
    st.write(css, unsafe_allow_html=True)
    st.header("Chat with documents :books:")
    user_question = st.text_input("Ask a question about your documents:")
    if user_question:
        process_user_input(user_question)
    with st.sidebar:
        st.subheader("Your documents")
        docs = st.file_uploader("Import documents", accept_multiple_files=True)
        with st.spinner("Processing"):
            raw = pdf2txt(docs)
            if len(raw) > 0:
                length = str(len(raw))
                text_chunks = txt2chunks(raw)
                vectorstore = vector_store(text_chunks)
                st.session_state.conversation = get_chain(vectorstore)
                st.markdown('# AI Search Index of Length:' + length + ' Created.')  # add timing
                filename = generate_filename(raw, 'txt')
                create_file(filename, raw, '', should_save)

    try:
        query_params = st.query_params
        # st.query_params returns plain strings, not lists
        query = query_params.get('q') or query_params.get('query') or ''
        if query:
            search_glossary(query)
    except Exception:
        st.markdown(' ')

    # Display the glossary grid
    st.markdown("### 🎲🗺️ Word Game Gallery")

    display_videos_and_links()  # Video Jump Grid
    display_images_and_wikipedia_summaries()  # Image Jump Grid
    display_glossary_grid(roleplaying_glossary)  # Word Glossary Jump Grid
    display_buttons_with_scores()  # Feedback Jump Grid

    if 'action' in st.query_params:
        action = st.query_params['action']  # st.query_params returns the value directly, not a list
        if action == 'show_message':
            st.success("Showing a message because 'action=show_message' was found in the URL.")
        elif action == 'clear':
            clear_query_params()
            st.rerun()

    # Handling repeated keys
    if 'multi' in st.query_params:
        multi_values = get_all_query_params('multi')
        st.write("Values for 'multi':", multi_values)

    # Manual entry for demonstration
    st.write("Enter query parameters in the URL like this: ?action=show_message&multi=1&multi=2")

    if 'query' in st.query_params:
        query = st.query_params['query']
        # Display content or an image based on the query
        display_content_or_image(query)

    # Add a clear-query-parameters button for convenience
    if st.button("Clear Query Parameters", key='ClearQueryParams'):
        # This clears the browser URL's query parameters
        st.query_params.clear()
        st.rerun()

# 18. Run AI pipeline
if __name__ == "__main__":
    whisper_main()
    main()