Install the packages in "requirements.txt" in your environment before running this code.
Configure settings such as ADMIN_ACCESS_KEY, ADMIN_SECRET_KEY, etc.
You can find them in [ Backend.ai Control Panel ].
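For reference, a minimal sketch of how these settings might be loaded. Reading them from environment variables and the BACKEND_ENDPOINT name are assumptions for illustration, not something taken from this repo.

```python
import os

# Assumption: credentials come from environment variables rather than being
# hard-coded. ADMIN_ACCESS_KEY / ADMIN_SECRET_KEY mirror the names above;
# BACKEND_ENDPOINT is a hypothetical name for the manager hub server URL.
ADMIN_ACCESS_KEY = os.environ["ADMIN_ACCESS_KEY"]
ADMIN_SECRET_KEY = os.environ["ADMIN_SECRET_KEY"]
BACKEND_ENDPOINT = os.environ.get("BACKEND_ENDPOINT", "https://manager.example.com")
```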
This is an API app that returns Backend.ai agents' current utilization in JSON format.
The GraphQL API is used for fetching data from the [ Backend.ai manager hub server ].
"generate_signature()" function generates all the signature needed for the connections with Backend.ai manager hub server.
"query_agent_list()" function queries some fields like {'id','occupied_slots','available_slots','live_stat'} from Agent Class.
"extract_utilization()" function makes json results we want
"myfunction_api()" function includes the Flask framework. Flask could make this code to an api application.
- host='0.0.0.0' => This setting makes the API accessible from outside.
- port=31000 => The port number is chosen to fall within Kubernetes's default NodePort range (30000-32767); pick one that is not already in use.
- debug=True => The server runs in debug mode, so you can see logs in the console whenever the API server receives a request.
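As referenced above, a minimal sketch of the Flask wiring. The /api/utilization route name, the manager host value, and the way the helpers from the sketches above are combined are assumptions for illustration. The JSON further below is an example of the response such an endpoint returns.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/utilization")  # hypothetical route name
def myfunction_api():
    # Query the agents over GraphQL and shape the response (see sketches above).
    agents = query_agent_list(BACKEND_ENDPOINT, "manager.example.com",  # hypothetical host
                              ADMIN_ACCESS_KEY, ADMIN_SECRET_KEY)
    return jsonify(extract_utilization(agents))

if __name__ == "__main__":
    # 0.0.0.0 exposes the server outside the container, 31000 matches the
    # intended NodePort, and debug=True prints request logs to the console.
    app.run(host="0.0.0.0", port=31000, debug=True)
```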
```json
{
  "results": {
    "item1": {
      "cpu": 3.94,
      "cuda": 99.12,
      "disk": "25.02",
      "id": "i-ai-1",
      "mem": "4.85"
    },
    "item2": {
      "cpu": 4.69,
      "cuda": 14.12,
      "disk": "8.44",
      "id": "i-ai-2",
      "mem": "5.29"
    },
    "item3": {
      "cpu": 2.08,
      "cuda": 12.5,
      "disk": "8.28",
      "id": "i-ai-3",
      "mem": "3.63"
    },
    "item4": {
      "cpu": 4.25,
      "cuda": 16.5,
      "disk": "8.35",
      "id": "i-ai-4",
      "mem": "5.49"
    },
    "item5": {
      "cpu": 0.84,
      "cuda": 24.38,
      "disk": "41.31",
      "id": "i-ai-5",
      "mem": "3.48"
    },
    "item6": {
      "cpu": 2.14,
      "cuda": 0.0,
      "disk": "79.48",
      "id": "i-ai-6",
      "mem": "3.55"
    },
    "time": "2023-03-14T11:20:44.488009+09:00"
  }
}
```
- Build a custom Docker image using the Dockerfile.
- The custom Docker image is used to create Kubernetes pods.
- Deploy the pods and the service in the k8s cluster.
- Finally, the API server runs on the k8s pods. You can access the API at [ http://<host ip>:<service nodePort> ] (a quick check is sketched below).
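A quick way to verify the deployed service from outside the cluster, sketched in Python. The host IP, NodePort, and route below are placeholders, not values from this repo.

```python
import requests

HOST_IP = "192.0.2.10"       # placeholder: your node's IP
SERVICE_NODE_PORT = 31000    # placeholder: the service's NodePort
ROUTE = "/api/utilization"   # hypothetical route from the Flask sketch above

resp = requests.get(f"http://{HOST_IP}:{SERVICE_NODE_PORT}{ROUTE}", timeout=5)
resp.raise_for_status()
print(resp.json()["results"]["time"])
```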