KevinHuSh committed
Commit 3859fce · 1 Parent(s): 3198faf

rm wrongly uploaded folder (#24)

python/Dockerfile DELETED
@@ -1,29 +0,0 @@
- FROM ubuntu:22.04 as base
-
- RUN apt-get update
-
- ENV TZ="Asia/Taipei"
- RUN apt-get install -yq \
-     build-essential \
-     curl \
-     libncursesw5-dev \
-     libssl-dev \
-     libsqlite3-dev \
-     libgdbm-dev \
-     libc6-dev \
-     libbz2-dev \
-     software-properties-common \
-     python3.11 python3.11-dev python3-pip
-
- RUN apt-get install -yq git
- RUN pip3 config set global.index-url https://mirror.baidu.com/pypi/simple
- RUN pip3 config set global.trusted-host mirror.baidu.com
- RUN pip3 install --upgrade pip
- RUN pip3 install torch==2.0.1
- RUN pip3 install torch-model-archiver==0.8.2
- RUN pip3 install torchvision==0.15.2
- COPY requirements.txt .
-
- WORKDIR /docgpt
- ENV PYTHONPATH=/docgpt/
-
 
python/ToPDF.pdf DELETED
File without changes
python/] DELETED
@@ -1,63 +0,0 @@
- from abc import ABC
- from openai import OpenAI
- import os
- import base64
- from io import BytesIO
-
- class Base(ABC):
-     def describe(self, image, max_tokens=300):
-         raise NotImplementedError("Please implement encode method!")
-
-
- class GptV4(Base):
-     def __init__(self):
-         import openapi
-         openapi.api_key = os.environ["OPENAPI_KEY"]
-         self.client = OpenAI()
-
-     def describe(self, image, max_tokens=300):
-         buffered = BytesIO()
-         try:
-             image.save(buffered, format="JPEG")
-         except Exception as e:
-             image.save(buffered, format="PNG")
-         b64 = base64.b64encode(buffered.getvalue()).decode("utf-8")
-
-         res = self.client.chat.completions.create(
-             model="gpt-4-vision-preview",
-             messages=[
-                 {
-                     "role": "user",
-                     "content": [
-                         {
-                             "type": "text",
-                             "text": "请用中文详细描述一下图中的内容,比如时间,地点,人物,事情,人物心情等。",
-                         },
-                         {
-                             "type": "image_url",
-                             "image_url": {
-                                 "url": f"data:image/jpeg;base64,{b64}"
-                             },
-                         },
-                     ],
-                 }
-             ],
-             max_tokens=max_tokens,
-         )
-         return res.choices[0].message.content.strip()
-
-
- class QWen(Base):
-     def chat(self, system, history, gen_conf):
-         from http import HTTPStatus
-         from dashscope import Generation
-         from dashscope.api_entities.dashscope_response import Role
-         # export DASHSCOPE_API_KEY=YOUR_DASHSCOPE_API_KEY
-         response = Generation.call(
-             Generation.Models.qwen_turbo,
-             messages=messages,
-             result_format='message'
-         )
-         if response.status_code == HTTPStatus.OK:
-             return response.output.choices[0]['message']['content']
-         return response.message
 
python/output/ToPDF.pdf DELETED
File without changes
python/res/1-0.tm DELETED
@@ -1,8 +0,0 @@
- 2023-12-20 11:44:08.791336+00:00
- 2023-12-20 11:44:08.853249+00:00
- 2023-12-20 11:44:08.909933+00:00
- 2023-12-21 00:47:09.996757+00:00
- 2023-12-20 11:44:08.965855+00:00
- 2023-12-20 11:44:09.011682+00:00
- 2023-12-21 00:47:10.063326+00:00
- 2023-12-20 11:44:09.069486+00:00
 
python/res/thumbnail-1-0.tm DELETED
@@ -1,3 +0,0 @@
- 2023-12-27 08:21:49.309802+00:00
- 2023-12-27 08:37:22.407772+00:00
- 2023-12-27 08:59:18.845627+00:00
 
python/tmp.log DELETED
@@ -1,15 +0,0 @@
-
- ----------- Model Configuration -----------
- Model Arch: GFL
- Transform Order:
- --transform op: Resize
- --transform op: NormalizeImage
- --transform op: Permute
- --transform op: PadStride
- --------------------------------------------
- Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
- The `max_size` parameter is deprecated and will be removed in v4.26. Please specify in `size['longest_edge'] instead`.
- Some weights of the model checkpoint at microsoft/table-transformer-structure-recognition were not used when initializing TableTransformerForObjectDetection: ['model.backbone.conv_encoder.model.layer3.0.downsample.1.num_batches_tracked', 'model.backbone.conv_encoder.model.layer2.0.downsample.1.num_batches_tracked', 'model.backbone.conv_encoder.model.layer4.0.downsample.1.num_batches_tracked']
- - This IS expected if you are initializing TableTransformerForObjectDetection from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing TableTransformerForObjectDetection from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- WARNING:root:The files are stored in /opt/home/kevinhu/docgpt/, please check it!