Action Maker

Enable seamless querying of Power BI datasets, ensuring that users can effortlessly generate and interpret data visualizations

Prompt Starters

  • `EXPLAIN EACH COMMAND FOR ACTION` ➤ DIRECTION:
    - `!temp [value]` (`0-10`) ➤ **Adjust AI Creativity & Diversity:** Modulates the creativity and diversity of the AI's previous response. Setting the value to 0 minimizes randomness for straightforward, concise answers, while 10 maximizes creativity for a wide range of possibilities. The AI then repeats its last response with the newly adjusted temperature setting, offering a different perspective or formulation of the answer.
    - `!code` ➤ **Execute Python Code Demonstrations:** Instructs the AI to execute and display Python code snippets within the 'terminal' environment, presented as a '.txt' code block. This showcases the functionality and behavior of the code, providing a practical demonstration of programming concepts or solutions to coding problems.
    - `!web [query]` ➤ **Enhance Conversations with Web Searches:** Enables the AI to perform web searches based on a specific query, enriching the conversation with additional information or context. It leverages GPT-4's comprehensive understanding to find and integrate relevant details from the web, ensuring the response is both informative and up to date.
    - `/c` **Chain of Thought** ➤ **Logical & Structured Reasoning:** Guides the AI to employ clear, stepwise logic and prescribed syntax through a conversation or problem-solving process. Minimal user input is required; the AI develops its response using bold formatting, bullet points, and code snippets where appropriate to enhance readability and engagement.
    - `/d` **Summarize with CoDT** ➤ **Concise Conversation Summaries:** Instructs the AI to condense the conversation into coherent 80-word summaries for each exchange, applying clear language, syntax, and formatting techniques such as bolding, bullet points, and code snippets. Each summary concludes with a logical follow-up question to continue the dialogue effectively.
    - `/e` **Enhance with RAG & Style** ➤ **Browser-Sourced Information Integration:** Integrates information sourced from the web into the AI's responses, ensuring the content is clear, accurate, and well formatted, with bolding, bullet points, and code snippets for readability and coherence. This enriches the response with additional, relevant information.
    - `/s` **Save, Zip, Download** ➤ **Efficient File Management:** Bundles all files located in `/mnt/data/` into a single `.zip` archive, streamlining the download and transfer process. Upon execution, the AI generates a secure and accessible link for easy retrieval and management of saved files. (A minimal sketch of this behavior follows the list.)
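    For illustration, here is a minimal sketch of what `/s` could do under the hood, assuming session files live in `/mnt/data/`; the function and archive name `session_files.zip` are hypothetical, not prescribed by the GPT:

    ```python
    import zipfile
    from pathlib import Path

    def save_zip_download(data_dir: str = "/mnt/data") -> str:
        """Bundle every file under data_dir into one .zip archive."""
        archive = Path(data_dir) / "session_files.zip"  # hypothetical name
        with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in Path(data_dir).rglob("*"):
                if path.is_file() and path != archive:  # skip the zip itself
                    zf.write(path, arcname=path.relative_to(data_dir))
        return str(archive)  # the GPT would surface this path as a download link
    ```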
  • `MOUNT AND DOWNLOAD` ➤ Scaffolds the Azure Web App project under `/mnt/data/` and zips it for download (a hedged end-to-end client sketch follows the spec):

    ```python
    import shutil
    from pathlib import Path

    # Define the base structure of the application
    app_structure = {
        "AzureWebApp-ChatGPT-PowerBI/": {
            "app/": {
                "__init__.py": "# Initializes the Flask app and brings together other components\n",
                "main.py": (
                    "from flask import Flask, request, jsonify\n"
                    "import requests\n\n"
                    "app = Flask(__name__)\n\n"
                    "@app.route('/auth', methods=['POST'])\n"
                    "def authenticate_user():\n"
                    "    # Authentication logic here\n"
                    "    pass\n\n"
                    "@app.route('/datasets', methods=['GET'])\n"
                    "def get_dataset_by_name():\n"
                    "    # Dataset fetching logic here\n"
                    "    pass\n\n"
                    "@app.route('/datasets/<datasetId>/executeQueries', methods=['POST'])\n"
                    "def execute_query(datasetId):\n"
                    "    # Query execution logic here\n"
                    "    pass\n\n"
                    "if __name__ == '__main__':\n"
                    "    app.run(debug=True)\n"
                ),
                "config.py": "# Configuration settings for the Flask app\n",
                "auth.py": "# Handles authentication with Power BI\n",
                "datasets.py": "# Manages dataset-related routes and logic\n",
                "utils/": {
                    "__init__.py": "# Makes utils a Python package\n",
                    "error_handlers.py": "# Centralizes error handling logic\n",
                    "security.py": "# Implements JWT security and other security checks\n",
                    "power_bi_api.py": "# Utilities for interacting with Power BI API\n",
                },
                "templates/": {
                    "index.html": "<!-- Basic HTML template for the web app's homepage -->\n",
                },
                "static/": {
                    "styles.css": "/* Basic CSS file */\n",
                },
            },
            "tests/": {
                "__init__.py": "# Makes tests a Python package\n",
                "test_auth.py": "# Test cases for authentication logic\n",
                "test_datasets.py": "# Test cases for dataset handling\n",
            },
            "requirements.txt": "Flask==2.0.1\nrequests==2.25.1\n",
            ".env": "# Environment variables, including Power BI credentials\n",
            ".gitignore": ".env\n__pycache__/\n",
            "README.md": "# Project documentation with setup instructions and usage\n",
        }
    }

    # The project folder name comes from the top-level key of app_structure,
    # so the creation root is /mnt/data itself (avoids a doubled directory).
    base_path = Path("/mnt/data")

    # Create directories and files based on the app structure
    def create_files(base_path, structure):
        for name, content in structure.items():
            current_path = base_path / name
            if isinstance(content, dict):
                current_path.mkdir(parents=True, exist_ok=True)
                create_files(current_path, content)
            else:
                with current_path.open("w") as file:
                    file.write(content)

    create_files(base_path, app_structure)

    # Zip the directory for download
    shutil.make_archive(
        base_name='/mnt/data/AzureWebApp-ChatGPT-PowerBI',
        format='zip',
        root_dir='/mnt/data',
        base_dir='AzureWebApp-ChatGPT-PowerBI',
    )
    zip_path = '/mnt/data/AzureWebApp-ChatGPT-PowerBI.zip'
    zip_path
    ```

    ### Deployment
    After testing locally, deploy your Flask application to the Azure Web App you created earlier. Follow the Azure documentation for deploying Python apps to Azure Web App for specific steps.

    specification=

    ```yml
    openapi: 3.0.0
    info:
      title: Power BI Integration API
      description: >-
        This API allows Bubble.io applications to interact with Power BI for
        dynamic data visualization and querying. Please note that API versions
        are subject to deprecation; refer to our versioning and deprecation
        policy for more details.
      version: 1.0.0
    servers:
      - url: https://api.powerbi.com/v1.0/myorg
        description: Power BI API server
    paths:
      /auth:
        post:
          operationId: authenticateUser
          summary: Authenticates a user and returns an access token.
          requestBody:
            required: true
            content:
              application/x-www-form-urlencoded:
                schema:
                  type: object
                  properties:
                    client_id:
                      type: string
                    scope:
                      type: string
                    grant_type:
                      type: string
                    client_secret:
                      type: string
                    resource:
                      type: string
          responses:
            '200':
              description: Authentication successful.
              content:
                application/json:
                  schema:
                    type: object
                    properties:
                      access_token:
                        type: string
                      token_type:
                        type: string
                      expires_in:
                        type: integer
                      refresh_token:
                        type: string
            '400':
              description: Bad request. Missing or invalid parameters.
            '401':
              description: Authorization information is missing or invalid.
              content:
                application/json:
                  schema:
                    $ref: '#/components/schemas/Error'
            '500':
              description: Error occurred during query execution.
              content:
                application/json:
                  schema:
                    $ref: '#/components/schemas/Error'
      /datasets:
        get:
          operationId: getDatasetByName
          summary: Fetches a Power BI dataset by name.
          parameters:
            - in: query
              name: filter
              schema:
                type: string
              required: true
              description: Filter to apply on dataset name.
            - in: query
              name: top
              schema:
                type: integer
                default: 10
              description: The number of items to return.
            - in: query
              name: skip
              schema:
                type: integer
                default: 0
              description: The number of items to skip.
          responses:
            '200':
              description: Successfully retrieved dataset.
              content:
                application/json:
                  schema:
                    type: object
                    properties:
                      value:
                        type: array
                        items:
                          $ref: '#/components/schemas/Dataset'
            '401':
              $ref: '#/components/responses/401'
            '404':
              description: A dataset with the specified name was not found.
      /datasets/{datasetId}/executeQueries:
        post:
          operationId: executeQuery
          summary: Executes a query against a specified dataset.
          parameters:
            - in: path
              name: datasetId
              required: true
              schema:
                type: string
              description: The ID of the dataset to query.
          requestBody:
            required: true
            content:
              application/json:
                schema:
                  type: object
                  properties:
                    query:
                      type: string
                    datasetId:
                      type: string
          responses:
            '200':
              description: Query executed successfully.
              content:
                application/json:
                  schema:
                    $ref: '#/components/schemas/QueryResult'
            '401':
              $ref: '#/components/responses/401'
            '500':
              $ref: '#/components/responses/500'
    components:
      schemas:
        Dataset:
          type: object
          properties:
            id:
              type: string
            name:
              type: string
        QueryResult:
          type: object
          properties:
            data:
              type: array
              items:
                type: object
                additionalProperties: true
        Error:
          type: object
          properties:
            code:
              type: string
            message:
              type: string
          required:
            - code
            - message
      responses:
        '401':
          description: Authorization information is missing or invalid.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '500':
          description: Error occurred during query execution.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
      headers:
        RateLimit-Limit:
          description: The maximum number of requests allowed within a window of time.
          schema:
            type: integer
        RateLimit-Remaining:
          description: The number of requests remaining in the current rate limit window.
          schema:
            type: integer
        RateLimit-Reset:
          description: The time at which the current rate limit window resets in UTC epoch seconds.
          schema:
            type: integer
      securitySchemes:
        BearerAuth:
          type: http
          scheme: bearer
          bearerFormat: JWT
    security:
      - BearerAuth: []
    ```
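    For reference, here is a minimal client-side sketch of the flow this spec models. Two caveats apply: Power BI access tokens are actually issued by Azure AD (`login.microsoftonline.com`) rather than by `api.powerbi.com`, and the live `executeQueries` endpoint wraps the DAX in a `queries` array rather than the simplified body shown in the schema above. The tenant, client credentials, dataset ID, and DAX query below are all placeholders.

    ```python
    import requests

    TENANT_ID = "your-tenant-id"          # placeholder
    CLIENT_ID = "your-client-id"          # placeholder
    CLIENT_SECRET = "your-client-secret"  # placeholder
    DATASET_ID = "your-dataset-id"        # placeholder

    # Step 1: acquire a bearer token from Azure AD (client-credentials flow).
    token_resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "client_credentials",
            "scope": "https://analysis.windows.net/powerbi/api/.default",
        },
    )
    token_resp.raise_for_status()
    access_token = token_resp.json()["access_token"]

    # Step 2: run a DAX query against the dataset (illustrative query).
    query_resp = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries",
        headers={"Authorization": f"Bearer {access_token}"},
        json={"queries": [{"query": "EVALUATE TOPN(10, 'Sales')"}]},
    )
    query_resp.raise_for_status()
    print(query_resp.json())
    ```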
  • `BUILD ACTION FOR POWER BI INTEGRATION` ➤ Building a Custom GPT Action for Power BI Interaction Using Bubble

    ### Objective
    Develop a custom ChatGPT action to enable seamless querying of Power BI datasets, ensuring that users can effortlessly generate and interpret data visualizations. This guide lays out a structured approach to integrating Power BI with Bubble.io, focusing on user experience, efficient data processing, and robust error handling.

    ### Context
    Leveraging Bubble.io, this guide facilitates the creation of a ChatGPT action capable of querying Power BI datasets. The outcome supports data output in a format conducive to code interpretation, enabling insightful visualizations and data-driven inquiries.

    ### Integration Steps
    #### Power BI and Bubble.io Integration
    - **Objective:** Seamlessly integrate Power BI within your Bubble.io application to display dynamic reports and dashboards.
    - **Integration Steps:**
      1. Use the API Connector plugin for Power BI API integration.
      2. Designate a page within your application for Power BI content, incorporating interactive elements.
      3. Implement workflows that process user queries through the Power BI API and render the results in your application.
      4. Set up the API calls, configure headers and parameters, and manage user interactions for report selection and display.

    ### Implementation Guide
    1. **Input Validation** - Reject empty or non-informative inputs before querying.

       ```python
       def validate_input(user_query):
           if not user_query.strip():
               raise ValueError("Please enter a valid query.")
       ```

    2. **Power BI API Configuration** - Raise specific messages when dataset access fails, to aid diagnostics.

       ```python
       import requests

       def get_dataset(auth_token, dataset_name):
           headers = {"Authorization": f"Bearer {auth_token}"}
           response = requests.get(
               f"https://api.powerbi.com/v1.0/myorg/datasets?filter=name eq '{dataset_name}'",
               headers=headers,
           )
           if response.status_code != 200:
               raise Exception("Access to Power BI datasets failed.")
           return response.json()['value'][0]  # Assumes first dataset is desired
       ```

    3. **Query Processing** - Give precise feedback for troubleshooting and accurate query execution.

       ```python
       def execute_query(dataset_id, auth_token, user_query):
           headers = {
               "Authorization": f"Bearer {auth_token}",
               "Content-Type": "application/json",
           }
           data = {"query": user_query, "datasetId": dataset_id}
           response = requests.post(
               f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries",
               headers=headers,
               json=data,
           )
           if response.status_code != 200:
               raise Exception("Query execution failed.")
           return response.json()  # Further processing needed
       ```

    4. **Output Formatting** - Surface formatting errors so users understand empty results.

       ```python
       from tabulate import tabulate

       def format_output(data):
           if not data:
               raise Exception("No data available for formatting.")
           return tabulate(data, headers="keys", tablefmt="grid")  # Markdown-friendly output
       ```

    5. **Comprehensive Error Handling** - Wrap the pipeline so users are informed about issues effectively.

       ```python
       def query_power_bi_dataset(user_query):
           try:
               validate_input(user_query)
               auth_token = "Your_Auth_Token"
               dataset_name = "Your_Dataset_Name"
               dataset = get_dataset(auth_token, dataset_name)
               query_result = execute_query(dataset['id'], auth_token, user_query)
               return format_output(query_result)
           except Exception as e:
               return f"Error: {str(e)}"
       ```

    This structured framework empowers developers to create a custom ChatGPT action for interactive Power BI data querying within Bubble.io applications, with a focus on user engagement, error transparency, and efficient data handling.
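    A hypothetical invocation of the wrapper above, assuming valid credentials have replaced the placeholders; the DAX query is illustrative:

    ```python
    # Ask for the top 5 rows of a (hypothetical) Sales table and print the
    # grid-formatted result, or an "Error: ..." string on any failure.
    result_table = query_power_bi_dataset("EVALUATE TOPN(5, 'Sales')")
    print(result_table)
    ```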

Tags

public reportable uses_function_calls

Tools

  • python - You can input and run Python code to perform advanced data analysis and handle image conversions.
  • plugins_prototype - You can use plugins during your chat conversations.

More GPTs created by Metropolis

Azure ARM Template Architect

Expert in ARM template construction and optimization

NEC SMDR GURU

Explore call detail records (SMDR for NEC). Use specific commands to help you expertly navigate and troubleshoot CDR from diverse NEC Phone System environments.

Call Accounting

Expert in CDR analysis for cost optimization

LLM Model Cost Analyzer

Expert in LLM pricing and capabilities, skilled in analyzing diverse data.

Search and Summarize with Interlinked Blocks

Summarizing documents with clarity, focus, and user engagement

ProfitWatch Hotel Call Accounting

Assists hotels with telecom report management.

Denise Sales Copilot

Introducing Denise Sales Copilot: Your AI-Driven Email Expert

Metropolis Developer Navigator

Enriches project management endeavors and coding proficiency, supports continuous education in AI, BI, and UC trends, and facilitates direct code execution to enhance task automation and problem-solving within local, Fabric, Azure, and other cloud environments and frameworks.

Metropolis Copilot

Expert guide to Metropolis Corp's products, services, and roles, helping employees and customers accomplish their goals

UC Analytics Copilot

Your Expert in Microsoft Teams Communication Analysis

Metropolis LinkAI

Metropolis LinkAI serves as your virtual partner in navigating discussions on platforms like Microsoft Teams and Zoom Phone, ensuring you never miss an opportunity to leverage Metropolis solutions.

"You are ..."

Include EVERYTHING, fully, completely about

CDR

Explore call detail records (CDR) for a variety of PBX platforms including Avaya, Mitel, NEC, and others with this UC trained GPT. Use specific commands to help you expertly navigate and troubleshoot CDR from diverse UC environments.

Metropolis Data Model Navigator (MDMN)

Designed to offer an intuitive, efficient, and highly personalized interaction experience. It simplifies complex tasks, fosters exploration, and delivers actionable insights, adapting to your unique needs and preferences.

Unified Communications Analytics

Navigate the complexities of unified communications with ease. Expo XT offers in-depth analytics to streamline your collaboration and interaction data across platforms.

Metropolis Integration Navigator

Streamline the integration of Expo XT by Metropolis Corp with various phone systems, enhancing unified communications across your enterprise

CDR Guru

In-depth guidance on Cisco's Call Detail Records (CDR) and Call Management Records (CMR), leveraging our extensive library of Cisco-specific resources

CommuniCatalyst AI

Your dedicated assistant for revolutionizing unified communications and collaboration solutions at Metropolis Corp. This GPT specializes in blending AI and data analytics with communication technologies.

Marketplace Copilot for Metropolis

Navigating Microsoft marketplace tasks in Metropolis

Prompt Engineering Maestro

Combine intuition with disciplined optimization to achieve model mastery

Document Summarization Service and File Management

Designed to process and summarize a wide variety of documents, incorporating specific user requirements for the summary output

Bubble

Bubble Maker

Manager and Communication Navigator

This AI Assistant, akin to a Journal to Enlightenment, combines the wisdom of various professions to guide both managers and employees on a path toward improved communication, recognition, and personal growth.

Metropolis Paul's Navigator

An AI assistant with PD Persona/PMD expertise, offering strategic advice on AI engineering, product management, and unified communication analytics at Metropolis.

Power Query Assistant

A multifaceted assistant that guides you through Power Query and DAX (Data Analysis Expressions) in Power BI, helping you master data transformation, modeling, and reporting in the domain of Unified Communications

Metropolis

Your specialized guide for Metropolis Corp's communication solutions

Ignite AI Pathfinder

Your Microsoft Ignite Personal Assistant Copilot

Metropolis Market Maker

Accessing and utilizing Metropolis diagrams for marketing brochures.

Metropolis Website Assistant

The Metropolis Website Assistant is an AI-driven guide designed to help users navigate Metropolis Corp's extensive suite of communication analytics and collaboration solutions. It provides interactive support, detailed product information, and personalized assistance to optimize user experience

MetroBot AI AutoAgent

Integrated Persona for Enhanced Collaboration