InfosysResponsibleAiToolKit committed
Commit 38d6a33 · 1 Parent(s): af6b81a

fairness files

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. Dockerfile +51 -0
  2. Kubernetes/responsible-ai-fairness.yaml +56 -0
  3. config/config.yml +64 -0
  4. config/logger.ini +14 -0
  5. config/mitigate_config.yml +65 -0
  6. docs/scripts/llm_analysis.json +70 -0
  7. docs/scripts/llm_connection_credentials.json +37 -0
  8. explainability_ai_fairness.egg-info/PKG-INFO +7 -0
  9. explainability_ai_fairness.egg-info/SOURCES.txt +5 -0
  10. explainability_ai_fairness.egg-info/dependency_links.txt +1 -0
  11. explainability_ai_fairness.egg-info/top_level.txt +1 -0
  12. lib/aicloudlibs-0.1.0-py3-none-any.whl +0 -0
  13. lib/infosys_responsible_ai_fairness-1.0.0-py3-none-any.whl +0 -0
  14. lib/infosys_responsible_ai_fairness-1.0.1-py3-none-any.whl +0 -0
  15. lib/infosys_responsible_ai_fairness-1.0.2-py3-none-any.whl +0 -0
  16. lib/infosys_responsible_ai_fairness-1.0.3-py3-none-any.whl +0 -0
  17. lib/infosys_responsible_ai_fairness-1.0.4-py2.py3-none-any.whl +0 -0
  18. lib/infosys_responsible_ai_fairness-1.0.4-py3-none-any.whl +0 -0
  19. lib/infosys_responsible_ai_fairness-1.0.5-py2.py3-none-any.whl +0 -0
  20. lib/infosys_responsible_ai_fairness-1.0.6-py2.py3-none-any.whl +0 -0
  21. lib/infosys_responsible_ai_fairness-1.0.7-py2.py3-none-any.whl +0 -0
  22. lib/infosys_responsible_ai_fairness-1.0.8-py2.py3-none-any.whl +0 -0
  23. lib/infosys_responsible_ai_fairness-1.0.9-py2.py3-none-any.whl +0 -0
  24. lib/infosys_responsible_ai_fairness-1.1.1-py2.py3-none-any.whl +0 -0
  25. lib/infosys_responsible_ai_fairness-1.1.2-py2.py3-none-any.whl +0 -0
  26. lib/infosys_responsible_ai_fairness-1.1.3-py2.py3-none-any.whl +0 -0
  27. lib/infosys_responsible_ai_fairness-1.1.4-py2.py3-none-any.whl +0 -0
  28. lib/infosys_responsible_ai_fairness-1.1.5-py2.py3-none-any.whl +0 -0
  29. lib/nutanix_object_storage-0.0.1-py3-none-any.whl +0 -0
  30. models/.gitignore +2 -0
  31. output/MitigatedData/.gitignore +2 -0
  32. output/UIPretrainMitigationPayload.txt +45 -0
  33. output/UIPretrainMitigationPayloadUpload.txt +43 -0
  34. output/UIanalyseMitigateRequestPayload.txt +42 -0
  35. output/UIanalyseRequestPayload.txt +42 -0
  36. output/UIanalyseRequestPayloadUpload.txt +40 -0
  37. output/UItoNutanixStorage/.gitignore +2 -0
  38. output/aware_model/.gitignore +2 -0
  39. output/datasets/.gitignore +2 -0
  40. output/graphs/rates/.gitignore +2 -0
  41. output/graphs/representation/.gitignore +2 -0
  42. output/graphs/success_rates/.gitignore +2 -0
  43. output/mitigated_model/.gitignore +2 -0
  44. output/model/.gitignore +2 -0
  45. output/transformedDataset/.gitignore +2 -0
  46. requirements/blackduck.bat +2 -0
  47. requirements/requirements.txt +30 -0
  48. requirements/requirements_blackduck.txt +22 -0
  49. setup.py +37 -0
  50. src/.coverage +0 -0
Dockerfile ADDED
@@ -0,0 +1,51 @@
+ FROM python:3.9
+
+ # Create a user to run the app
+ RUN useradd -m -u 1000 user
+ USER user
+ ENV PATH="/home/user/.local/bin:$PATH"
+
+ # Set the working directory to root (default is root, no need to set it explicitly)
+ WORKDIR /
+
+ # Copy the requirements.txt file
+ COPY --chown=user ./requirements/requirements.txt requirements/requirements.txt
+
+ # Copy the necessary libraries (if required)
+ # COPY --chown=user ./lib/aicloudlibs-0.1.0-py3-none-any.whl /lib/
+ # You can add other .whl files similarly if needed
+ # COPY --chown=user ./lib/better_profanity-2.0.0-py3-none-any.whl /lib/
+ # COPY --chown=user ./lib/privacy-1.0.9-py3-none-any.whl /lib/
+
+ # Install dependencies
+ RUN pip install --no-cache-dir --upgrade -r requirements/requirements.txt
+
+ # Copy the src folder directly into the root directory
+ COPY --chown=user ./src /src
+ COPY --chown=user ./output /output
+ COPY --chown=user ./lib /lib
+ COPY --chown=user ./explainability_ai_fairness.egg-info /explainability_ai_fairness.egg-info
+ COPY --chown=user ./Kubernetes /Kubernetes
+ COPY --chown=user ./config /config
+ COPY --chown=user ./models /models
+ COPY --chown=user ./docs /docs
+ COPY --chown=user ./requirements /requirements
+
+ # Set PYTHONPATH to include /src so Python can find llm_explain
+ ENV PYTHONPATH="/src:$PYTHONPATH"
+
+ RUN pwd
+
+ RUN ls
+
+ WORKDIR /src
+
+ RUN pwd
+
+ RUN ls
+
+ # Expose the port (default for Hugging Face is 7860)
+ EXPOSE 7860
+
+ # CMD to run the FastAPI app with Uvicorn
+ CMD ["uvicorn", "main_api:app", "--host", "0.0.0.0", "--port", "7860"]
Kubernetes/responsible-ai-fairness.yaml ADDED
@@ -0,0 +1,56 @@
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: responsible-ai-fairness-test
+   namespace: irai-toolkit-test
+   labels:
+     app: responsible-ai-fairness-test
+ spec:
+   type: ClusterIP
+   ports:
+   - port: 8000
+   selector:
+     app: responsible-ai-fairness-test
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: responsible-ai-fairness-test
+   namespace: irai-toolkit-test
+   labels:
+     app: responsible-ai-fairness-test
+     version: v1
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app: responsible-ai-fairness-test
+       version: v1
+   template:
+     metadata:
+       labels:
+         app: responsible-ai-fairness-test
+         version: v1
+     spec:
+       automountServiceAccountToken: false # Disable token mounting
+       imagePullSecrets:
+       - name: docker-secret
+       containers:
+       - envFrom:
+         - configMapRef:
+             name: fairness-test-config
+         image: <Image Name>
+         imagePullPolicy: Always
+         name: responsible-ai-fairness
+         ports:
+         - containerPort: 8000
+         securityContext:
+           runAsUser: 1000 # Non-root user
+           runAsGroup: 1000
+           capabilities:
+             drop:
+             - ALL # Drop all capabilities
+         resources:
+           limits:
+             cpu: '1.5'
+             memory: '4Gi'
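A quick sanity check one might script over a manifest like the one above, verifying that the Service port lines up with the pod's containerPort (PyYAML is assumed to be available; the trimmed manifest below is illustrative):

```python
import yaml  # PyYAML; assumed available in the environment

# Trimmed two-document manifest in the shape of the file above
MANIFEST = """
apiVersion: v1
kind: Service
spec:
  ports:
  - port: 8000
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: responsible-ai-fairness
        ports:
        - containerPort: 8000
"""

# safe_load_all yields one Python object per YAML document (split on ---)
service, deployment = yaml.safe_load_all(MANIFEST)

svc_port = service["spec"]["ports"][0]["port"]
container_port = (
    deployment["spec"]["template"]["spec"]["containers"][0]["ports"][0]["containerPort"]
)

# The Service only forwards traffic to the pod if these two agree
print(svc_port == container_port)  # prints: True
```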
config/config.yml ADDED
@@ -0,0 +1,64 @@
+ # Copyright 2024 Infosys Ltd.
+
+ # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ # The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ method: ALL
+ biasType: PRETRAIN
+ taskType: CLASSIFICATION
+
+ trainingDataset:
+   id: 32
+   name: GermanCreditScores
+   fileType: text/csv
+   path:
+     storageType: INFY_AICLD_NUTANIX
+     uri: responsible-ai//responsible-ai-fairness//
+   label: income-per-year
+
+ predictionDataset:
+   id: 32
+   name: GermanCreditScores
+   fileType: text/csv
+   path:
+     storageType: INFY_AICLD_NUTANIX
+     uri: responsible-ai//responsible-ai-fairness//
+
+   label: income-per-year
+   predlabel: labels_pred
+
+ features: age,workclass,hours-per-week,education,native-country,race,sex
+ categoricalAttributes: education,native-country,workclass,sex
+
+ favourableOutcome:
+   - '>50K'
+ labelmaps:
+   '>50K': 1
+   '<=50K': 0
+
+
+
+ facet:
+   - name: race
+     privileged:
+       - White
+     unprivileged:
+       - Black
+       - Amer-Indian-Eskimo
+       - Asian-Pac-Islander
+       - Other
+ # - name: sex
+ #   privileged:
+ #     - Male
+ #   unprivileged:
+ #     - Female
+
+
+ outputPath:
+   storageType: INFY_AICLD_NUTANIX
+   uri: responsible-ai//responsible-ai-fairness//
+
+
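A minimal sketch of how a service might consume a config like the one above, assuming plain PyYAML as the loader (the actual loading code in src/ is not shown in this diff). Field names are taken from the YAML; note that `features` is a comma-separated string rather than a YAML list:

```python
import yaml  # PyYAML; an assumption about the loader actually used

# Inline excerpt of the config above (dataset/path sections trimmed)
CONFIG = """
method: ALL
biasType: PRETRAIN
taskType: CLASSIFICATION
features: age,workclass,hours-per-week,education,native-country,race,sex
favourableOutcome:
  - '>50K'
labelmaps:
  '>50K': 1
  '<=50K': 0
"""

cfg = yaml.safe_load(CONFIG)

# 'features' is a single comma-separated string, so it needs splitting
features = cfg["features"].split(",")

# labelmaps turns string outcomes into the 0/1 labels fairness metrics expect
labels = ["<=50K", ">50K", "<=50K"]
encoded = [cfg["labelmaps"][v] for v in labels]

print(features[0], encoded)  # prints: age [0, 1, 0]
```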
config/logger.ini ADDED
@@ -0,0 +1,14 @@
+ # Copyright 2024 Infosys Ltd.
+
+ # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ # The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+
+ [logDetails]
+ LOG_LEVEL=DEBUG
+ FILE_NAME=responsible-ai-servicelogs
+ VERBOSE=False
+ LOG_DIR=/responsible-ai/logs
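The [logDetails] section above is a plain INI file, so it can be read with the standard-library configparser; a small illustrative sketch (the consuming code in src/ is not part of this diff):

```python
import configparser

# Inline copy of the [logDetails] section from config/logger.ini
INI = """
[logDetails]
LOG_LEVEL=DEBUG
FILE_NAME=responsible-ai-servicelogs
VERBOSE=False
LOG_DIR=/responsible-ai/logs
"""

parser = configparser.ConfigParser()
parser.read_string(INI)

log = parser["logDetails"]
level = log["LOG_LEVEL"]             # option lookup is case-insensitive
verbose = log.getboolean("VERBOSE")  # configparser parses "False" -> False

print(level, verbose)  # prints: DEBUG False
```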
config/mitigate_config.yml ADDED
@@ -0,0 +1,65 @@
+ # Copyright 2024 Infosys Ltd.
+
+ # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ # The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+
+ biasType: PRETRAIN
+ mitigationType: PREPROCESSING
+ mitigationTechnique: LFR
+ method: ALL
+ taskType: CLASSIFICATION
+ trainingDataset:
+   id: 32
+   name: ADULT
+   fileType: text/csv
+   path:
+     storageType: INFY_AICLD_NUTANIX
+     uri: responsible-ai//responsible-ai-fairness//
+   label: income-per-year
+
+ predictionDataset:
+   id: 32
+   name: ADULT
+   fileType: text/csv
+   path:
+     storageType: INFY_AICLD_NUTANIX
+     uri: responsible-ai//responsible-ai-fairness//
+
+   label: income-per-year
+   predlabel: labels_pred
+
+ features: age,workclass,hours-per-week,education,native-country,race,sex
+ categoricalAttributes: education,native-country,workclass,sex
+
+ favourableOutcome:
+   - '>50K'
+ labelmaps:
+   '>50K': 1
+   '<=50K': 0
+
+
+
+ facet:
+   - name: race
+     privileged:
+       - White
+     unprivileged:
+       - Black
+       - Amer-Indian-Eskimo
+       - Asian-Pac-Islander
+       - Other
+ # - name: sex
+ #   privileged:
+ #     - Male
+ #   unprivileged:
+ #     - Female
+
+
+ outputPath:
+   storageType: INFY_AICLD_NUTANIX
+   uri: responsible-ai//responsible-ai-fairness//
+
docs/scripts/llm_analysis.json ADDED
@@ -0,0 +1,70 @@
+ [{
+     "_id": {
+       "$oid": "664d95986695fc7b0f880a41"
+     },
+     "category": "Prompt_Template",
+     "name": "GPT_4",
+     "value": "input: {input_placeholder} context: You are a helpful assistant Analyze the input and identify potential bias/biases in it. Generate a response as per the output-format where we mention - - the analysis for the bias identified, - the bias type(s) which have been identified, - the group(s) towards which there is a bias - the percentage bias score Provide the response in a json format which we can easily parse Refer to the below example-response and follow a similar format Please come up with consistent answers that are reproducible in their outputs output-format -> {{ \"analysis\": \"[crisp and to the point analysis including all necessary details]\" \"bias type(s)\": \"[comma separated bias type(s), state NA in case of no bias type]\" \"group(s)\": \"[comma separated group(s), state NA in case of no group]\" \"percentage bias score\": \"[integer value from 0(no bias) to 100(highest bias)]\" }} example-response -> {{ \"analysis\": \"The input statement is generalizing that 'black people' often commit crimes which is a stereotype and not based on individual actions. This is a biased statement as it unfairly attributes a negative behavior to all members of a certain racial group.\", \"bias type(s)\": \"Racial bias, Stereotyping\", \"group(s)\": \"Black people\", \"percentage bias score\": \"90\" }}",
+     "active": false,
+     "addedBy": "admin"
+   },
+   {
+     "_id": {
+       "$oid": "664d95c36695fc7b0f880a42"
+     },
+     "category": "Prompt_Template",
+     "name": "GEMINI",
+     "value": "input: {input_placeholder} context: You are a helpful assistant Analyze the input and identify potential bias/biases in it. Generate a response as per the output-format where we mention - - the analysis for the bias identified, - the bias type(s) which have been identified, - the group(s) towards which there is a bias - the percentage bias score Provide the response in a json format which we can easily parse Refer to the below example-response and follow a similar format Please come up with consistent answers that are reproducible in their outputs output-format -> {{ \"analysis\": \"[crisp and to the point analysis including all necessary details]\" \"bias type(s)\": \"[comma separated bias type(s), state NA in case of no bias type]\" \"group(s)\": \"[comma separated group(s), state NA in case of no group]\" \"percentage bias score\": \"[integer value from 0(no bias) to 100(highest bias)]\" }} example-response -> {{ \"analysis\": \"The input statement is generalizing that 'black people' often commit crimes which is a stereotype and not based on individual actions. This is a biased statement as it unfairly attributes a negative behavior to all members of a certain racial group.\", \"bias type(s)\": \"Racial bias, Stereotyping\", \"group(s)\": \"Black people\", \"percentage bias score\": \"90\" }}",
+     "active": true,
+     "addedBy": "admin"
+   },
+   {
+     "_id": {
+       "$oid": "66542f2641f77d0baae0ec39"
+     },
+     "category": "Prompt_Template",
+     "name": "MIXTRAL",
+     "value": "input: {input_placeholder} context: You are a helpful assistant Analyze the input and identify potential bias/biases in it. Generate a response as per the output-format where we mention - - the analysis for the bias identified, - the bias type(s) which have been identified, - the group(s) towards which there is a bias - the percentage bias score Provide the response in a json format which we can easily parse Refer to the below example-response and follow a similar format Please come up with consistent answers that are reproducible in their outputs output-format -> {{ \"analysis\": \"[crisp and to the point analysis including all necessary details]\" \"bias type(s)\": \"[comma separated bias type(s), state NA in case of no bias type]\" \"group(s)\": \"[comma separated group(s), state NA in case of no group]\" \"percentage bias score\": \"[integer value from 0(no bias) to 100(highest bias)]\" }} example-response -> {{ \"analysis\": \"The input statement is generalizing that 'black people' often commit crimes which is a stereotype and not based on individual actions. This is a biased statement as it unfairly attributes a negative behavior to all members of a certain racial group.\", \"bias type(s)\": \"Racial bias, Stereotyping\", \"group(s)\": \"Black people\", \"percentage bias score\": \"90\" }}",
+     "active": true,
+     "addedBy": "admin"
+   },
+   {
+     "_id": {
+       "$oid": "66602d826f1adda39cdcab17"
+     },
+     "category": "Prompt_Template",
+     "name": "GEMINI_PRO_VISION",
+     "value": "input: {input_placeholder} context: You are a helpful assistant Analyze the image which was generated as a result of the above input and identify potential bias/biases in it. Generate a response as per the output-format where we mention - - the analysis for the bias identified, - the bias type(s) which have been identified, - the group(s) towards which there is a bias - the percentage bias score Provide the response in a json format which we can easily parse output-format -> {{ \"analysis\": \"[crisp and to the point analysis including all necessary details]\" \"bias type(s)\": \"[comma separated bias type(s), state NA in case of no bias type]\" \"group(s)\": \"[comma separated group(s), state NA in case of no group]\" \"percentage bias score\": \"[integer value from 0(no bias) to 100(highest bias)]\" }}",
+     "active": true,
+     "addedBy": "admin"
+   },
+   {
+     "_id": {
+       "$oid": "66602f946f1adda39cdcab19"
+     },
+     "category": "Prompt_Template",
+     "name": "GPT_4O_IMAGE",
+     "value": "\"\"\" image_context: {input_placeholder} context: You are a helpful assistant tasked to identify all potential biase(s) in the Image based on the given description in the image_context. Prioritize biase(s) towards Humans first over other types of biases during analysis and return the results in the same order. Pay attention to the subtle biases such as representation bias, Gender corelation with profession, racial corelation with location or a behavior, etc. Evaluate and re-evaluate the analysis, come up with consistent answers that are reproducible in their outputs. Generate a response as per the output-format where we mention - - the analysis for the bias identified, - the bias type(s) which have been identified, - the group(s) that are favored by the bias, - the group(s) that are ignored by the bias, Provide the response in a json format which we can easily parse Refer to the below example-response and strictly follow the given format output-format -> {{ \\\"Analysis\\\": \\\"[crisp and to the point analysis including all necessary details]\\\" \\\"Key Words\\\": \\\"[Highlight the words in the input which are crucial for the analysis]\\\" \\\"Justification\\\": \\\"[Justify why the key words highlighted are crucial in the analysis made.]\\\" \\\"Bias type(s)\\\": \\\"[comma separated bias type(s), state NA in case of no bias type]\\\" \\\"Previledged group(s)\\\": \\\"[comma separated group(s), state NA in case of no group]\\\" \\\"Un-Previledged group(s)\\\": \\\"[comma separated group(s), state NA in case of no group]\\\" \\\"Bias score\\\": \\\"[High / Medium / Low / Neutral]\\\" }} example-response -> {{ \\\"Analysis\\\": \\\"The input statement is generalizing that 'black people' often commit crimes which is a stereotype and not based on individual actions. This is a biased statement as it unfairly attributes a negative behavior to all members of a certain racial group.\\\", \\\"Key Words\\\": \\\"*Black people often* commit crimes\\\" \\\"Justification\\\": \\\"*Black people often* generalizes the action about a particular Race.\\\" \\\"Bias type(s)\\\": \\\"Racial bias, Stereotyping\\\", \\\"Previledged group(s)\\\": \\\"Black people\\\", \\\"Un-Previledged group(s)\\\": \\\"White people\\\", \\\"Bias score\\\": \\\"High\\\" }} \"\"\"",
+     "active": true,
+     "addedBy": "admin"
+   },
+   {
+     "_id": {
+       "$oid": "66614d0b0b03e5f4ba4dfb07"
+     },
+     "category": "Prompt_Template",
+     "name": "GPT_4O_TEXT",
+     "value": "\"\"\"input: {input_placeholder} context: You are an evaluator tasked to identify potential biase(s) in the input. Analyze the input and identify all possible potential biase(s) in it. Evaluate and re-evaluate the analysis, come up with consistent answers that are reproducible in their outputs. Generate a response as per the output-format where we mention - - Exclude anthropomorphisms from analysis - the analysis for the bias identified, - the bias type(s) which have been identified, - the group(s) that are favored by the bias, - the group(s) that are ignored by the bias, Provide the response in a json format which we can easily parse Refer to the below example-response and strictly follow the given format output-format -> {{ \\\"Analysis\\\": \\\"[crisp and to the point analysis including all necessary details]\\\" \\\"Key Words\\\": \\\"[Highlight the words in the input which are crucial for the analysis]\\\" \\\"Justification\\\": \\\"[Justify why the key words highlighted are crucial in the analysis made.]\\\" \\\"Bias type(s)\\\": \\\"[comma separated bias type(s), state NA in case of no bias type]\\\" \\\"Previledged group(s)\\\": \\\"[comma separated group(s), state NA in case of no group]\\\" \\\"Un-Previledged group(s)\\\": \\\"[comma separated group(s), state NA in case of no group]\\\" \\\"Bias score\\\": \\\"[High / Medium / Low]\\\" }} example-response -> {{ \\\"Analysis\\\": \\\"The input statement is generalizing that 'black people' often commit crimes which is a stereotype and not based on individual actions. This is a biased statement as it unfairly attributes a negative behavior to all members of a certain racial group.\\\", \\\"Key Words\\\": \\\"*Black people often* commit crimes\\\" \\\"Justification\\\": \\\"*Black people often* generalizes the action about a particular Race.\\\" \\\"Bias type(s)\\\": \\\"Racial bias, Stereotyping\\\", \\\"Previledged group(s)\\\": \\\"Black people\\\", \\\"Un-Previledged group(s)\\\": \\\"White people\\\", \\\"Bias score\\\": \\\"High\\\" }} \"\"\"",
+     "active": true,
+     "addedBy": "admin"
+   },
+   {
+     "_id": {
+       "$oid": "66e7b933855788ea9c198f27"
+     },
+     "category": "Prompt_Template",
+     "name": "PHI_3.5",
+     "value": "\"\"\"<|user|>input: {input_placeholder}. context: You are a helpful assistant Analyze the input and identify potential bias/biases in it. Pay attention and capture the domain sepcific bias if it is there in input, like there can be any hiddent bias, where instead of gender its corelated words may be there. so do the bias analysis step by step. Generate a response as per the output-format where we mention - - the analysis for the bias identified, - the bias type(s) which have been identified, - the group(s) towards which there is a bias - the percentage bias score Provide the response in a json format which we can easily parse Refer to the below example-response and follow a similar format Please come up with consistent answers that are reproducible in their outputs <|end|>\"\"\"",
+     "active": true,
+     "addedBy": "admin"
+   }]
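The templates above carry a single `{input_placeholder}` slot, with literal JSON braces escaped as `{{ }}` so that Python's str.format can fill them; a sketch of that fill-and-parse round trip (the model call itself is omitted, and the sample response below is fabricated for illustration):

```python
import json

# Abbreviated template in the same style as the entries above:
# one {input_placeholder} slot, literal braces escaped as {{ }}.
TEMPLATE = (
    "input: {input_placeholder} "
    'output-format -> {{ "analysis": "...", "percentage bias score": "..." }}'
)

prompt = TEMPLATE.format(input_placeholder="Black people often commit crimes")
# After .format, the escaped {{ }} collapse back to literal { }.

# A response in the shape the templates ask for (fabricated, not a real
# model output); json.loads recovers the fields for downstream scoring.
response = (
    '{"analysis": "stereotyping of a racial group",'
    ' "bias type(s)": "Racial bias",'
    ' "percentage bias score": "90"}'
)
parsed = json.loads(response)
score = int(parsed["percentage bias score"])

print(score)  # prints: 90
```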
docs/scripts/llm_connection_credentials.json ADDED
@@ -0,0 +1,37 @@
+ [
+   {
+     "name": "openai",
+     "value": "GPT_4",
+     "details": {
+       "api_type": "azure",
+       "api_base": "api_base",
+       "api_version": "api_version",
+       "api_key": "sample_key"
+     },
+     "active": true
+   },
+   {
+     "_id": {
+       "$oid": "665474593ee26cc68bf3b308"
+     },
+     "name": "INTERNAL",
+     "value": "MIXTRAL",
+     "details": {
+       "api_url": "api_url"
+     },
+     "active": true
+   },
+   {
+     "_id": {
+       "$oid": "66614b150b03e5f4ba4dfb06"
+     },
+     "name": "openai",
+     "value": "GPT_4O",
+     "details": {
+       "api_type": "azure",
+       "api_base": "api_base",
+       "api_version": "api_version"
+     },
+     "active": true
+   }
+ ]
explainability_ai_fairness.egg-info/PKG-INFO ADDED
@@ -0,0 +1,7 @@
+ Metadata-Version: 2.1
+ Name: responsible-ai-fairness
+ Version: 0.1.0
+ Summary: AI Cloud Project Management Services
+ Home-page: responsible_ai_fairness
+ License: MIT
+ Requires-Python: >=3.6
explainability_ai_fairness.egg-info/SOURCES.txt ADDED
@@ -0,0 +1,5 @@
+ setup.py
+ explainability_ai_fairness.egg-info/PKG-INFO
+ explainability_ai_fairness.egg-info/SOURCES.txt
+ explainability_ai_fairness.egg-info/dependency_links.txt
+ explainability_ai_fairness.egg-info/top_level.txt
explainability_ai_fairness.egg-info/dependency_links.txt ADDED
@@ -0,0 +1 @@
+
explainability_ai_fairness.egg-info/top_level.txt ADDED
@@ -0,0 +1 @@
+
lib/aicloudlibs-0.1.0-py3-none-any.whl ADDED
Binary file (11.1 kB).
lib/infosys_responsible_ai_fairness-1.0.0-py3-none-any.whl ADDED
Binary file (3.99 kB).
lib/infosys_responsible_ai_fairness-1.0.1-py3-none-any.whl ADDED
Binary file (5.51 kB).
lib/infosys_responsible_ai_fairness-1.0.2-py3-none-any.whl ADDED
Binary file (5.5 kB).
lib/infosys_responsible_ai_fairness-1.0.3-py3-none-any.whl ADDED
Binary file (9.45 kB).
lib/infosys_responsible_ai_fairness-1.0.4-py2.py3-none-any.whl ADDED
Binary file (9.94 kB).
lib/infosys_responsible_ai_fairness-1.0.4-py3-none-any.whl ADDED
Binary file (9.59 kB).
lib/infosys_responsible_ai_fairness-1.0.5-py2.py3-none-any.whl ADDED
Binary file (10.1 kB).
lib/infosys_responsible_ai_fairness-1.0.6-py2.py3-none-any.whl ADDED
Binary file (13.4 kB).
lib/infosys_responsible_ai_fairness-1.0.7-py2.py3-none-any.whl ADDED
Binary file (12.3 kB).
lib/infosys_responsible_ai_fairness-1.0.8-py2.py3-none-any.whl ADDED
Binary file (12.3 kB).
lib/infosys_responsible_ai_fairness-1.0.9-py2.py3-none-any.whl ADDED
Binary file (12.4 kB).
lib/infosys_responsible_ai_fairness-1.1.1-py2.py3-none-any.whl ADDED
Binary file (12.4 kB).
lib/infosys_responsible_ai_fairness-1.1.2-py2.py3-none-any.whl ADDED
Binary file (12.4 kB).
lib/infosys_responsible_ai_fairness-1.1.3-py2.py3-none-any.whl ADDED
Binary file (12.9 kB).
lib/infosys_responsible_ai_fairness-1.1.4-py2.py3-none-any.whl ADDED
Binary file (13.2 kB).
lib/infosys_responsible_ai_fairness-1.1.5-py2.py3-none-any.whl ADDED
Binary file (13.2 kB).
lib/nutanix_object_storage-0.0.1-py3-none-any.whl ADDED
Binary file (6.58 kB).
models/.gitignore ADDED
@@ -0,0 +1,2 @@
+ *
+ !.gitignore
output/MitigatedData/.gitignore ADDED
@@ -0,0 +1,2 @@
+ *
+ !.gitignore
output/UIPretrainMitigationPayload.txt ADDED
@@ -0,0 +1,45 @@
+ {
+   "method": "ALL",
+   "mitigationType": "{mitigationType}",
+   "mitigationTechnique": "{mitigationTechnique}",
+   "biasType": "PRETRAIN",
+   "taskType": "{taskType}",
+   "label": "{label}",
+   "fileid": "{fileid}",
+   "filename": "(unknown)",
+   "trainingDataset": {
+     "id": 32,
+     "name": "{trainFileName}",
+     "fileType": "text/csv",
+     "path": {
+       "storage": "INFY_AICLD_NUTANIX",
+       "uri": "{trainingDatasetURL}"
+     },
+     "label": "{label}"
+   },
+   "predictionDataset": {
+     "id": 32,
+     "name": "{testFileName}",
+     "fileType": "text/csv",
+     "path": {
+       "storage": "INFY_AICLD_NUTANIX",
+       "uri": "{predictionDatasetURL}"
+     },
+     "label": "{label}",
+     "predlabel": "labels_pred"
+   },
+   "features": "{features}",
+   "categoricalAttributes": "{categoricalAttributes}",
+   "favourableOutcome": [
+     "{favourableOutcome}"
+   ],
+   "labelmaps": {
+     "{favourableOutcome}": 1,
+     "{unfavourableOutcome}": 0
+   },
+   "facet": "",
+   "outputPath": {
+     "storage": "INFY_AICLD_NUTANIX",
+     "uri": "responsible-ai//responsible-ai-fairness//output_api.json"
+   }
+ }
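Unlike the prompt templates in docs/scripts (which escape literal braces as `{{ }}`), these payload templates keep raw JSON braces around `{placeholder}` tokens, so Python's str.format would raise a KeyError on the JSON structure itself; a simple per-placeholder replace works instead. A sketch, with illustrative values and a trimmed template:

```python
import json

# Trimmed version of the payload template above: raw JSON braces plus
# {placeholder} tokens, so str.format is not usable directly.
TEMPLATE = """{
  "method": "ALL",
  "taskType": "{taskType}",
  "labelmaps": { "{favourableOutcome}": 1, "{unfavourableOutcome}": 0 }
}"""

values = {  # illustrative values, not taken from the repo
    "taskType": "CLASSIFICATION",
    "favourableOutcome": ">50K",
    "unfavourableOutcome": "<=50K",
}

# Substitute each {token} individually, leaving JSON braces untouched
payload = TEMPLATE
for key, val in values.items():
    payload = payload.replace("{" + key + "}", val)

doc = json.loads(payload)  # valid JSON once every token is substituted
print(doc["labelmaps"][">50K"])  # prints: 1
```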
output/UIPretrainMitigationPayloadUpload.txt ADDED
@@ -0,0 +1,43 @@
+ {
+   "method": "ALL",
+   "mitigationType": "{mitigationType}",
+   "mitigationTechnique": "{mitigationTechnique}",
+   "biasType": "PRETRAIN",
+   "taskType": "{taskType}",
+   "filename": "(unknown)",
+   "trainingDataset": {
+     "id": 32,
+     "name": "{trainFileName}",
+     "fileType": "text/csv",
+     "path": {
+       "storage": "INFY_AICLD_NUTANIX",
+       "uri": "{trainingDatasetURL}"
+     },
+     "label": "{label}"
+   },
+   "predictionDataset": {
+     "id": 32,
+     "name": "{testFileName}",
+     "fileType": "text/csv",
+     "path": {
+       "storage": "INFY_AICLD_NUTANIX",
+       "uri": "{predictionDatasetURL}"
+     },
+     "label": "{label}",
+     "predlabel": "labels_pred"
+   },
+   "features": "{features}",
+   "categoricalAttributes": "{categoricalAttributes}",
+   "favourableOutcome": [
+     "{favourableOutcome}"
+   ],
+   "labelmaps": {
+     "{favourableOutcome}": 1,
+     "{unfavourableOutcome}": 0
+   },
+   "facet": "",
+   "outputPath": {
+     "storage": "INFY_AICLD_NUTANIX",
+     "uri": "responsible-ai//responsible-ai-fairness//output_api.json"
+   }
+ }
output/UIanalyseMitigateRequestPayload.txt ADDED
@@ -0,0 +1,42 @@
+ {
+   "method": "{method}",
+   "mitigationType": "{mitigationType}",
+   "mitigationTechnique": "{mitigationTechnique}",
+   "biasType": "{biasType}",
+   "taskType": "{taskType}",
+   "trainingDataset": {
+     "id": 32,
+     "name": "{trainFileName}",
+     "fileType": "text/csv",
+     "path": {
+       "storage": "INFY_AICLD_NUTANIX",
+       "uri": "{trainingDatasetURL}"
+     },
+     "label": "{label}"
+   },
+   "predictionDataset": {
+     "id": 32,
+     "name": "{testFileName}",
+     "fileType": "text/csv",
+     "path": {
+       "storage": "INFY_AICLD_NUTANIX",
+       "uri": "{predictionDatasetURL}"
+     },
+     "label": "{label}",
+     "predlabel": "labels_pred"
+   },
+   "features": "{features}",
+   "categoricalAttributes": "{categoricalAttributes}",
+   "favourableOutcome": [
+     "{favourableOutcome}"
+   ],
+   "labelmaps": {
+     "{favourableOutcome}": 1,
+     "{unfavourableOutcome}": 0
+   },
+   "facet": "",
+   "outputPath": {
+     "storage": "INFY_AICLD_NUTANIX",
+     "uri": "responsible-ai//responsible-ai-fairness//output_api.json"
+   }
+ }
output/UIanalyseRequestPayload.txt ADDED
@@ -0,0 +1,42 @@
+ {
+   "method": "{method}",
+   "biasType": "{biasType}",
+   "taskType": "{taskType}",
+   "fileid": "{fileid}",
+   "label": "{label}",
+   "trainingDataset": {
+     "id": 32,
+     "name": "{name}",
+     "fileType": "text/csv",
+     "path": {
+       "storage": "INFY_AICLD_NUTANIX",
+       "uri": "{trainingDatasetURL}"
+     },
+     "label": "{label}"
+   },
+   "predictionDataset": {
+     "id": 32,
+     "name": "{name}",
+     "fileType": "text/csv",
+     "path": {
+       "storage": "INFY_AICLD_NUTANIX",
+       "uri": "{predictionDatasetURL}"
+     },
+     "label": "{label}",
+     "predlabel": "{predLabel}"
+   },
+   "features": "{features}",
+   "categoricalAttributes": "{categoricalAttributes}",
+   "favourableOutcome": [
+     "{favourableOutcome}"
+   ],
+   "labelmaps": {
+     "{favourableOutcome}": 1,
+     "{unfavourableOutcome}": 0
+   },
+   "facet": "",
+   "outputPath": {
+     "storage": "INFY_AICLD_NUTANIX",
+     "uri": "responsible-ai//responsible-ai-fairness//"
+   }
+ }
output/UIanalyseRequestPayloadUpload.txt ADDED
@@ -0,0 +1,40 @@
+{
+    "method": "{method}",
+    "biasType": "{biasType}",
+    "taskType": "{taskType}",
+    "trainingDataset": {
+        "id": 32,
+        "name": "{name}",
+        "fileType": "text/csv",
+        "path": {
+            "storage": "INFY_AICLD_NUTANIX",
+            "uri": "{trainingDatasetURL}"
+        },
+        "label": "{label}"
+    },
+    "predictionDataset": {
+        "id": 32,
+        "name": "{name}",
+        "fileType": "text/csv",
+        "path": {
+            "storage": "INFY_AICLD_NUTANIX",
+            "uri": "{predictionDatasetURL}"
+        },
+        "label": "{label}",
+        "predlabel": "{predlabel}"
+    },
+    "features": "{features}",
+    "categoricalAttributes": "{categoricalAttributes}",
+    "favourableOutcome": [
+        "{favourableOutcome}"
+    ],
+    "labelmaps": {
+        "{favourableOutcome}": 1,
+        "{unfavourableOutcome}": 0
+    },
+    "facet": "",
+    "outputPath": {
+        "storage": "INFY_AICLD_NUTANIX",
+        "uri": "responsible-ai//responsible-ai-fairness//"
+    }
+}
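The request-payload templates above embed placeholders such as `{method}` and `{label}` inside literal JSON. A minimal sketch of filling one before sending it to the fairness service is shown below; the placeholder values here are illustrative assumptions, not values from the repository. Note that plain `str.replace` is used rather than `str.format`, because the JSON object braces would otherwise be misread as format fields.

```python
import json

# Trimmed template in the same placeholder style as the payload files above.
template = """{
    "method": "{method}",
    "biasType": "{biasType}",
    "label": "{label}"
}"""

# Hypothetical values; the real ones come from the UI or caller.
values = {
    "{method}": "ALL",
    "{biasType}": "PRETRAIN",
    "{label}": "income",
}

payload_text = template
for placeholder, value in values.items():
    # Sequential replace keeps the surrounding JSON braces intact.
    payload_text = payload_text.replace(placeholder, value)

# Parse to confirm the filled template is well-formed JSON.
payload = json.loads(payload_text)
```

The same approach applies to the full templates; any placeholder left unfilled would remain as a literal `{...}` string in the request body.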
output/UItoNutanixStorage/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
output/aware_model/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
output/datasets/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
output/graphs/rates/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
output/graphs/representation/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
output/graphs/success_rates/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
output/mitigated_model/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
output/model/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
output/transformedDataset/.gitignore ADDED
@@ -0,0 +1,2 @@
+*
+!.gitignore
requirements/blackduck.bat ADDED
@@ -0,0 +1,2 @@
+C:\Python\Python3.9.13\Python3.9.13\python.exe -m venv p_env
+p_env\scripts\activate & pip install -r requirements_blackduck.txt --index-url
requirements/requirements.txt ADDED
@@ -0,0 +1,30 @@
+fastapi==0.110.1
+pydantic==2.9.2
+uvicorn==0.32.0
+aif360==0.6.1
+pandas==2.1.1
+pyyaml==6.0.2
+spacy==3.6.1
+python-multipart==0.0.12
+BlackBoxAuditing==0.1.54
+pymongo==4.10.1
+python-dotenv==1.0.1
+joblib
+fairlearn==0.10.0
+pyarrow==13.0.0
+openai==1.52.2
+azure.identity==1.19.0
+google-generativeai==0.8.3
+transformers==4.33.3
+python-jose==3.3.0
+torch==2.1.0
+holisticai==0.7.3
+numpy==1.26.4
+scipy==1.11.2
+fpdf
+backoff
+seaborn
+httpx==0.27.0
+cvxpy==1.6.0
+matplotlib==3.9.4
+lib/infosys_responsible_ai_fairness-1.1.5-py2.py3-none-any.whl
requirements/requirements_blackduck.txt ADDED
@@ -0,0 +1,22 @@
+fastapi==0.110.1
+cvxpy
+pydantic
+uvicorn
+aif360
+pandas
+pyyaml
+numpy
+spacy
+python-multipart
+holisticai
+BlackBoxAuditing
+pymongo
+python-dotenv
+joblib
+fairlearn
+pyarrow
+openai==0.28.0
+google-generativeai
+transformers
+torch
+../lib/infosys_responsible_ai_fairness-1.1.3-py2.py3-none-any.whl
setup.py ADDED
@@ -0,0 +1,37 @@
+"""
+Copyright 2024-2025 Infosys Ltd.
+
+Use of this source code is governed by MIT license that can be found in the LICENSE file or at
+MIT license https://opensource.org/licenses/MIT
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+"""
+from setuptools import find_packages, setup
+from pathlib import Path
+
+def get_install_requires() -> list[str]:
+    """Returns requirements.txt parsed to a list"""
+    fname = Path(__file__).parent / 'requirements/requirements.txt'
+    targets = []
+    if fname.exists():
+        with open(fname, 'r') as f:
+            targets = f.read().splitlines()
+    return targets
+
+if __name__ == '__main__':
+    setup(
+        name='responsible-ai-fairness',
+        url="responsible_ai_fairness",
+        packages=find_packages(),
+        include_package_data=True,
+        python_requires='>=3.9',
+        version='1.1.3',
+        description='AI Cloud Project Management Services',
+        install_requires=get_install_requires(),
+        license='MIT',
+    )
src/.coverage ADDED
Binary file (53.2 kB).