Files can be uploaded from local machine or S3 bucket and then LLM model can be
### Getting started

:warning: You will need a Neo4j Database V5.15 or later with [APOC installed](https://neo4j.com/docs/apoc/current/installation/) to use this Knowledge Graph Builder.

You can use any [Neo4j Aura database](https://neo4j.com/aura/), including the free database.

If you are using Neo4j Desktop, you will not be able to use docker-compose; instead, follow the [separate deployment of backend and frontend section](#running-backend-and-frontend-separately-dev-environment). :warning:

### Deploy locally
#### Running through docker-compose

By default, only OpenAI and Diffbot are enabled, since Gemini requires extra GCP configuration.

In your root folder, create a `.env` file with your OpenAI and Diffbot keys (if you want to use both):

```env
OPENAI_API_KEY="your-openai-key"
DIFFBOT_API_KEY="your-diffbot-key"
```

If you only want OpenAI:

```env
LLM_MODELS="OpenAI GPT 3.5,OpenAI GPT 4o"
OPENAI_API_KEY="your-openai-key"
```

If you only want Diffbot:

```env
LLM_MODELS="Diffbot"
DIFFBOT_API_KEY="your-diffbot-key"
```

You can then run Docker Compose to build and start all components:

```bash
docker-compose up --build
```
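Before starting the containers, it can help to confirm the `.env` file actually defines the keys your selected models need. The following is an illustrative sketch, not part of the project: the key and model names come from the examples above, and the parser is a simplification of what docker-compose and python-dotenv really do.

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY="value" lines into a dict, ignoring comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

# Which API key each provider needs (taken from the examples above).
REQUIRED_KEY = {"OpenAI": "OPENAI_API_KEY", "Diffbot": "DIFFBOT_API_KEY"}

def missing_keys(env: dict) -> list:
    """Return the API keys required by LLM_MODELS that are absent or empty."""
    models = env.get("LLM_MODELS", "Diffbot,OpenAI GPT 3.5,OpenAI GPT 4o")
    needed = {key for provider, key in REQUIRED_KEY.items()
              if any(provider in m for m in models.split(","))}
    return sorted(key for key in needed if not env.get(key))
```

For example, `missing_keys(parse_env(open(".env").read()))` returns an empty list when every key your chosen models require is set.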
##### Additional configs

By default, the available input sources are: local files, YouTube, Wikipedia and AWS S3, i.e. this default config is applied:

```env
REACT_APP_SOURCES="local,youtube,wiki,s3"
```

If, however, you want the Google GCS integration, add `gcs` and your Google client ID:

```env
REACT_APP_SOURCES="local,youtube,wiki,s3,gcs"
GOOGLE_CLIENT_ID="xxxx"
```

You can of course combine all sources (local, youtube, wiki, s3 and gcs) or remove any you don't want or need.

#### Running Backend and Frontend separately (dev environment)

Alternatively, you can run the backend and frontend separately:

- For the frontend:

1. Create the frontend/.env file by copy/pasting frontend/example.env.
2. Change values as needed.
3. Run:

```bash
cd frontend
yarn
yarn run dev
```

- For the backend:

1. Create the backend/.env file by copy/pasting backend/example.env.
2. Change values as needed.
3. Run:

```bash
cd backend
python -m venv envName
source envName/bin/activate
pip install -r requirements.txt
uvicorn score:app --reload
```
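Once uvicorn is running, a quick way to verify the backend is up is to request the interactive API docs that FastAPI apps serve at `/docs`. This sketch assumes the default uvicorn address (`http://localhost:8000`); adjust the URL if you changed the port.

```python
# Liveness check for the dev backend. A 200 response from /docs
# (FastAPI's built-in Swagger UI page) means the server is running.
import urllib.request

def backend_is_up(url: str = "http://localhost:8000/docs", timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout, or HTTP error.
        return False
```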
### ENV
|---|---|---|---|
| REACT_APP_SOURCES | Optional | local,youtube,wiki,s3 | List of input sources that will be available |
| LLM_MODELS | Optional | Diffbot,OpenAI GPT 3.5,OpenAI GPT 4o | Models available for selection on the frontend, used for entity extraction and the Q&A chatbot |
| ENV | Optional | DEV | Environment variable for the app |
| TIME_PER_CHUNK | Optional | 4 | Time per chunk for processing |
| CHUNK_SIZE | Optional | 5242880 | Size of each chunk for processing |
| GOOGLE_CLIENT_ID | Optional | | Client ID for Google authentication |
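Since every variable in the table is optional, a config loader can simply fall back to the table's default when a variable is unset. The following sketch is illustrative, not the project's actual code; only the names and defaults are taken from the table.

```python
import os

# Defaults copied from the ENV table above.
DEFAULTS = {
    "REACT_APP_SOURCES": "local,youtube,wiki,s3",
    "LLM_MODELS": "Diffbot,OpenAI GPT 3.5,OpenAI GPT 4o",
    "ENV": "DEV",
    "TIME_PER_CHUNK": "4",
    "CHUNK_SIZE": "5242880",
    "GOOGLE_CLIENT_ID": "",
}

def get_config() -> dict:
    """Read each setting from the environment, falling back to its default."""
    return {name: os.environ.get(name, default) for name, default in DEFAULTS.items()}
```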
### Deploy on Google Cloud

To deploy the app and packages on Google Cloud Platform, run the following command on Google Cloud Run: