# bai-monitoring-api

## 1. Install requirements.txt

Install the packages listed in `requirements.txt` in your environment before running this code.

## 2. Configure config.py

`config.py` is read by `app.py`. Set the values such as `ADMIN_ACCESS_KEY`, `ADMIN_SECRET_KEY`, etc. You can find them in the [ Backend.ai Control Panel ].

## 3. Explanation of app.py

This is an API app that returns the Backend.ai agents' current utilization in JSON format. A GraphQL API is used to fetch the data from the [ Backend.ai manager hub server ].

- `generate_signature()` generates all the signatures needed for the connections to the Backend.ai manager hub server.
- `query_agent_list()` queries fields such as `{'id', 'occupied_slots', 'available_slots', 'live_stat'}` from the Agent class.
- `extract_utilization()` builds the JSON results we want.
- `myfunction_api()` wires the handler into the Flask framework, which turns this code into an API application:
  - `host='0.0.0.0'` => makes the API reachable from outside the host.
  - `port=31000` => the port number sits in Kubernetes's default NodePort range (30000-32767); avoid the numbers you are already using.
  - `debug=True`

## 4. Run the API server

```
python3 agent_list_api_flask.py
```

It runs in debug mode, so you can see the logs in the console whenever the API server receives a request.

![image.png](./image.png)

## 5. API URL

[http://10.231.238.231:31000/api/getMonitoring]

Response example:

```json
{
  "results": {
    "item1": { "cpu": 3.94, "cuda": 99.12, "disk": "25.02", "id": "i-ai-1", "mem": "4.85" },
    "item2": { "cpu": 4.69, "cuda": 14.12, "disk": "8.44", "id": "i-ai-2", "mem": "5.29" },
    "item3": { "cpu": 2.08, "cuda": 12.5, "disk": "8.28", "id": "i-ai-3", "mem": "3.63" },
    "item4": { "cpu": 4.25, "cuda": 16.5, "disk": "8.35", "id": "i-ai-4", "mem": "5.49" },
    "item5": { "cpu": 0.84, "cuda": 24.38, "disk": "41.31", "id": "i-ai-5", "mem": "3.48" },
    "item6": { "cpu": 2.14, "cuda": 0.0, "disk": "79.48", "id": "i-ai-6", "mem": "3.55" },
    "time": "2023-03-14T11:20:44.488009+09:00"
  }
}
```
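The app.py flow described in section 3 could be sketched as below. This is an illustrative outline, not the repo's actual code: the field names inside `live_stat`, the `AGENT_QUERY` string, and the signature scheme are assumptions (the real `generate_signature()` must follow Backend.ai's HMAC request-signing specification), and the route returns a canned record instead of calling the manager hub server.

```python
# Sketch of app.py (illustrative; see the hedges in the paragraph above).
import hashlib
import hmac
from datetime import datetime, timezone

from flask import Flask, jsonify

# Placeholders for the values normally loaded from config.py
# (obtained from the Backend.ai Control Panel).
ADMIN_ACCESS_KEY = "AKIA-EXAMPLE"
ADMIN_SECRET_KEY = "SECRET-EXAMPLE"

# GraphQL query over the Agent class fields listed in section 3 (assumed shape).
AGENT_QUERY = """
query {
  agent_list {
    items { id occupied_slots available_slots live_stat }
  }
}
"""

def generate_signature(method: str, rel_url: str, date: str) -> str:
    """Illustrative HMAC-SHA256 signature; the real message layout is
    defined by the Backend.ai manager API, not by this sketch."""
    msg = f"{method}\n{rel_url}\n{date}".encode()
    return hmac.new(ADMIN_SECRET_KEY.encode(), msg, hashlib.sha256).hexdigest()

def extract_utilization(agents: list) -> dict:
    """Reshape raw agent records into the item1..itemN response format."""
    results = {}
    for i, agent in enumerate(agents, start=1):
        stat = agent.get("live_stat", {})
        results[f"item{i}"] = {
            "id": agent["id"],
            "cpu": stat.get("cpu_util", 0.0),
            "cuda": stat.get("cuda_util", 0.0),
            "mem": stat.get("mem", "0"),
            "disk": stat.get("disk", "0"),
        }
    results["time"] = datetime.now(timezone.utc).isoformat()
    return results

app = Flask(__name__)

@app.route("/api/getMonitoring")
def myfunction_api():
    # The real app sends AGENT_QUERY with signed headers to the
    # Backend.ai manager hub server; here we use a canned record.
    agents = [{"id": "i-ai-1",
               "live_stat": {"cpu_util": 3.94, "cuda_util": 99.12,
                             "mem": "4.85", "disk": "25.02"}}]
    return jsonify({"results": extract_utilization(agents)})

if __name__ == "__main__":
    # 0.0.0.0 exposes the API outside the host; 31000 is in the NodePort range.
    app.run(host="0.0.0.0", port=31000, debug=True)
```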
## 6. Further usage (running on k8s)

- This Git repo is used by the Dockerfile.
- Build a custom Docker image using the Dockerfile.
- The custom Docker image is used to create Kubernetes pods.
- Deploy the pods and the Service in the k8s cluster.
- Finally, the API server runs on k8s pods. You access the API URL with [ http://&lt;node-ip&gt;:&lt;node-port&gt; ].
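The steps above could be sketched with a minimal Dockerfile and Kubernetes manifest. These are illustrative fragments, not files from this repo: the image name, labels, and resource names are assumptions.

```dockerfile
# Illustrative Dockerfile; the repo's actual Dockerfile may differ.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 31000
CMD ["python3", "agent_list_api_flask.py"]
```

```yaml
# Illustrative Deployment + NodePort Service (names are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bai-monitoring-api
spec:
  replicas: 1
  selector:
    matchLabels: { app: bai-monitoring-api }
  template:
    metadata:
      labels: { app: bai-monitoring-api }
    spec:
      containers:
        - name: api
          image: bai-monitoring-api:latest   # your custom image
          ports:
            - containerPort: 31000
---
apiVersion: v1
kind: Service
metadata:
  name: bai-monitoring-api
spec:
  type: NodePort
  selector: { app: bai-monitoring-api }
  ports:
    - port: 31000
      targetPort: 31000
      nodePort: 31000   # must lie in the cluster's NodePort range (default 30000-32767)
```

With the Service in place, the API is reachable at the node IP and NodePort from outside the cluster.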