feat: add SR&ED tracking and project management tools

This commit introduces several new files and updates to support
SR&ED tracking and project management:

- Adds a template for SR&ED tasks to standardize issue creation.
- Adds a Makefile command to set up GitHub labels from a YAML file.
- Adds a Makefile command to export SR&ED-eligible issues to a
 Markdown file.
- Adds a Makefile command to create issues from a file.
- Adds documentation for SR&ED tracking and development
 conventions.
- Adds a YAML file to define GitHub labels.
- Adds scripts to set up GitHub labels, export issues, and create
 issues from a file.
- Updates the project plan to include SR&ED considerations.

These changes aim to improve project organization, facilitate
SR&ED claims, and streamline development workflows.
Author: bwnyasse
Date: 2025-11-13 11:33:52 -05:00
Parent: 5d718ff077
Commit: 6540d01175

10 changed files with 462 additions and 24 deletions

.github/ISSUE_TEMPLATE/sred_task.md (new file, 31 lines)

@@ -0,0 +1,31 @@
---
name: SR&ED Task
about: Use this template for a new development task that may be eligible for SR&ED.
title: '[Category] Short description of the task'
labels: 'sred-eligible'
---
### 🎯 Objective
*(A concise, one-sentence summary of what this issue aims to accomplish. Ex: "Connect the Events page to the development backend.")*
---
### 🔬 SR&ED Justification
* **Technological Uncertainty:** What is the technical challenge or unknown we are trying to solve? *(Ex: "Can the Data Connect generated SDK be performantly integrated with TanStack Query in our existing React architecture?")*
* **Systematic Investigation:** What is our planned approach to resolve this uncertainty? *(Ex: "We will build a PoC on the Events page, measure load times, and document the optimal integration pattern.")*
---
### 💻 Technical Implementation Notes
* **Key Files to Modify:** `file1.js`, `file2.gql`, etc.
* **Suggested Approach:** A brief description of the technical steps. *(Ex: "1. Define `listEvents` query in GraphQL. 2. Generate the SDK. 3. Create a `useEvents` hook that uses `useQuery`...")*
* **Considerations:** Potential pitfalls or points to watch out for. *(Ex: "Ensure loading and error states are handled correctly.")*
---
### ✅ Acceptance Criteria
*A checklist of what must be true for the task to be considered "done."*
- [ ] The code is implemented following the technical notes.
- [ ] All new code is linted and formatted.
- [ ] The functionality is tested and works as expected in the `dev` environment.
- [ ] *(Example for a UI task)* **Given** I am on the Events page, **when** the page loads, **then** I should see a list of events coming from the `dev` backend.

Makefile

@@ -61,3 +61,16 @@ help:
	@echo " make help - Shows this help message."
	@echo "--------------------------------------------------"

# --- Project Management ---
setup-labels:
	@echo "--> Setting up GitHub labels..."
	@./scripts/setup-github-labels.sh

export-issues:
	@echo "--> Exporting GitHub issues to documentation..."
	@./scripts/export_issues.sh

create-issues-from-file:
	@echo "--> Creating GitHub issues from file..."
	@./scripts/create_issues.py

Project plan

@@ -6,7 +6,7 @@ This document breaks down the technical roadmap into actionable tasks, assigned
## Milestone 1: Foundation & Dev Environment Setup
*Goal: Establish a fully functional, shared `dev` environment on GCP/Firebase and validate that all core components (Web, Mobile, Backend) can be built, deployed, and connected.*
### Infrastructure & Tooling (Primarily CTO)
- **Issue:** `[Infra] Setup Enpass for Team Credential Management`
@@ -14,27 +14,45 @@ This document breaks down the technical roadmap into actionable tasks, assigned
- **Issue:** `[Infra] Create GCP/Firebase Projects (dev, staging, prod)`
- **Description:** Set up the three distinct Google Cloud projects and associated Firebase projects. Enable required APIs (Auth, Cloud SQL, Data Connect).
- **Issue:** `[Infra] Create Multi-Env Makefile`
- **Description:** Create the main `Makefile` to handle environment switching (`ENV=dev/staging`) and orchestrate all build/deploy tasks.
- **Issue:** `[Infra] Setup Shared Dev Database`
- **Description:** Provision the initial Cloud SQL for PostgreSQL instance for the `dev` environment.
### Backend & Web (Dev 1)
- **Epic:** `[Onboarding] End-to-End Flow Validation with 'Event' Entity`
- **Issue:** `[Backend] Define and Deploy 'Event' Schema`
- **Description:** Translate the `Event` schema from the API specification into `.gql` files. Define the basic `listEvents` query and `createEvent` mutation. Use the `Makefile` to deploy this to the `dev` environment and validate that the `events` table is created in Cloud SQL.
- **Issue:** `[Web] Generate TypeScript SDK for Dev Env`
- **Description:** Configure and run the SDK generation command to create the TypeScript SDK pointing to the `dev` environment.
- **Issue:** `[Web] Connect 'Events' Page to Dev Backend (PoC)`
- **Description:** Modify the main web application's `Events.jsx` page. Replace the existing mock/Base44 data fetching with the new TanStack Query hooks from the generated SDK to display a list of events from our own `dev` backend. This validates the full end-to-end workflow on a real feature.
- **Epic:** `[Backend] KROW Schema Implementation`
- **Issue:** `[Backend] Define GraphQL Schema for Remaining Core Entities`
- **Description:** Translate `Staff`, `Vendor`, `User`, and other core schemas from the API specification into `.gql` files and deploy them.
### Mobile (Dev 2)
- **Epic:** `[Mobile] Analysis & Documentation`
- **Issue:** `[Mobile-Doc] Analyze & Document Existing App Logic`
- **Description:** Review the legacy Flutter codebases to identify and document key business logic and user flows.
- **Issue:** `[Mobile-Doc] Create & Update Workflow Diagrams`
- **Description:** Based on the analysis, create or update Mermaid diagrams for critical workflows and add them to the internal launchpad.
- **Epic:** `[Mobile] CI/CD & Skeleton App Setup`
- **Issue:** `[Mobile-CI/CD] Configure CodeMagic & Firebase App Distribution`
- **Description:** Set up CodeMagic and configure build workflows for iOS/Android with automated deployment to Firebase App Distribution.
- **Issue:** `[Mobile-CI/CD] Initialize Skeleton Apps in Monorepo`
- **Description:** Create new, clean Flutter projects for `client-app` and `staff-app` within the `mobile-apps` directory.
- **Issue:** `[Mobile-CI/CD] Implement Initial CI/CD Pipeline`
- **Description:** Create a "Hello World" version of the Staff app and validate that it can be automatically built and deployed to App Distribution.
- **Epic:** `[Mobile] Backend Integration Validation`
- **Issue:** `[Mobile-Auth] Implement Firebase Auth Flow in Skeleton App`
- **Description:** Add Firebase Authentication to the skeleton Staff app and ensure users can sign up/log in against the `dev` project.
- **Issue:** `[Mobile-Backend] Generate Flutter SDK for Dev Env`
- **Description:** Configure and run the SDK generation command to create the Flutter SDK for the `dev` environment.
- **Issue:** `[Mobile-Backend] Create Proof-of-Concept Screen`
- **Description:** Build a simple screen in the skeleton Staff app that, after login, fetches and displays a list of events from the `dev` backend using the new SDK.
---
@@ -44,15 +62,15 @@ This document breaks down the technical roadmap into actionable tasks, assigned
### Backend (Dev 1)
- **Epic:** `[Backend] Implement Full API Logic`
- **Description:** Create all necessary GraphQL queries and mutations in Data Connect for all entities. Deploy them continuously to the `dev` environment.
### Web (Dev 1, with support from Dev 2)
- **Epic:** `[Web] Full Application Re-wiring`
- **Description:** Systematically replace all data-fetching logic in the web app to use the TanStack Query hooks from the generated Data Connect SDK.
### Mobile (Dev 2)
- **Epic:** `[Mobile] Port Features to New Apps`
- **Description:** Systematically port the features and UI from the legacy apps into the new, clean skeleton apps, connecting them to the Data Connect backend via the generated SDK.
---
@@ -62,11 +80,11 @@ This document breaks down the technical roadmap into actionable tasks, assigned
### Infrastructure & DevOps (CTO & Team)
- **Issue:** `[CI/CD] Configure Web App Deployment Pipeline`
- **Description:** Set up a GitHub Actions pipeline to build and deploy the web app to Firebase Hosting (`staging` and `prod`).
- **Issue:** `[CI/CD] Finalize Production Mobile Deployment`
- **Description:** Finalize the CodeMagic pipelines for deployment to TestFlight/Play Store production tracks.
- **Issue:** `[CI/CD] Configure Backend Deployment Pipeline`
- **Description:** Automate the deployment of the Data Connect schema and operations.
- **Issue:** `[Data] Create & Test Initial Data Import Scripts`
- **Description:** Write scripts to populate the production database with any necessary initial data.
- **Issue:** `[QA] Deploy to Staging & Perform E2E Testing`

docs/09-sred-tracking.md (new file, 77 lines)

@@ -0,0 +1,77 @@
# SR&ED Project Documentation - KROW Platform
This document serves as the primary record for tracking Scientific Research and Experimental Development (SR&ED) activities for the KROW project. It translates our project plan into the language of technological uncertainty and systematic investigation, as required for SR&ED claims.
## Overall Technological Uncertainty
The core technological uncertainty of this project is whether a unified backend, built on the novel Firebase Data Connect service, can effectively and performantly serve a heterogeneous set of clients (a React web app and two Flutter mobile apps) while maintaining data integrity in a complex relational model (PostgreSQL). This involves overcoming challenges in schema management, SDK generation, and real-time data synchronization across platforms, for which no standard industry solution exists.
---
## Milestone 1: Foundation & Dev Environment Setup
### 1.1. Technological Uncertainty
Can we establish a stable, multi-environment (dev, staging, prod) development workflow for a complex monorepo that integrates a declarative backend (Data Connect), a web frontend, and mobile frontends? The primary challenge is to create a reproducible setup that overcomes the limitations of local emulation and allows for parallel, collaborative development on a shared cloud infrastructure without conflicts.
### 1.2. Hypothesis
By combining a multi-environment `Makefile`, Firebase project aliases, and auto-generated, environment-aware SDKs, we hypothesize that we can create a streamlined and scalable development workflow. This approach should allow developers to seamlessly switch between cloud environments and ensure that all client applications (web and mobile) are always interacting with the correct backend instance.
### 1.3. Experimental Work
*(This section can be auto-populated by running `make export-issues` with the appropriate filters/labels.)*
- **`[Infra] Create Multi-Env Makefile`:** Development of a script to manage different cloud environments, which is a non-trivial engineering task involving environment variable injection and conditional logic.
- **`[Backend] Define GraphQL Schema & Deploy to Dev`:** Experimentation with the Data Connect schema-to-SQL generation process to validate its capabilities, performance with relational data, and limitations.
- **`[Web/Mobile] Generate & Integrate SDKs`:** Systematic investigation into the interoperability of the auto-generated SDKs with modern frontend frameworks (React/TanStack Query and Flutter/BLoC).
### 1.4. Results & Learnings
*(To be filled out upon milestone completion.)*
---
## Milestone 2: Core Feature Implementation
### 2.1. Technological Uncertainty
Once the foundational architecture is in place, the next uncertainty is whether the declarative nature of Data Connect is powerful enough to handle the complex business logic required by the KROW platform. Can we implement features like multi-step event creation, real-time status updates, and complex data validation purely through GraphQL mutations and queries, without needing a separate, imperative logic layer (like traditional Cloud Functions)?
### 2.2. Hypothesis
We hypothesize that by leveraging advanced GraphQL features and the underlying power of PostgreSQL (accessible via Data Connect), we can encapsulate most, if not all, of the core business logic directly within our Data Connect backend. This would create a more maintainable and "self-documenting" system where the API definition itself contains the business rules.
### 2.3. Experimental Work
*(This section can be auto-populated by running `make export-issues` with the appropriate filters/labels.)*
- **`[Backend] Implement Full API Logic`:** This involves systematically testing the limits of Data Connect's mutation capabilities to handle transactional logic and data validation.
- **`[Web/Mobile] Full Application Re-wiring`:** This work will test the performance and ergonomics of the generated SDKs at scale, across dozens of components and screens.
### 2.4. Results & Learnings
*(To be filled out upon milestone completion.)*
---
## Milestone 3: Production Readiness & Go-Live
### 3.1. Technological Uncertainty
The final uncertainty is whether our automated, monorepo-based deployment strategy is robust and reliable enough for production. Can we create CI/CD pipelines that can correctly build, test, and deploy three distinct artifacts (Web, Mobile, Backend) in a coordinated manner, while managing environment-specific configurations and secrets securely?
### 3.2. Hypothesis
We hypothesize that by using a combination of GitHub Actions for workflow orchestration and CodeMagic for specialized Flutter builds, managed by our central `Makefile`, we can create a fully automated "push-to-deploy" system for all environments.
### 3.3. Experimental Work
*(This section can be auto-populated by running `make export-issues` with the appropriate filters/labels.)*
- **`[CI/CD] Configure Deployment Pipelines`:** This involves significant engineering work to script and test the automated build and deployment processes for each part of the monorepo.
- **`[Data] Create & Test Initial Data Import Scripts`:** Development of reliable and idempotent scripts to populate the production database.
### 3.4. Results & Learnings
*(To be filled out upon milestone completion.)*


@@ -0,0 +1,25 @@
# Development Conventions
This document outlines the development conventions for the KROW project, including our GitHub label system.
## GitHub Labels
We use a structured system of labels to categorize and prioritize our work. The single source of truth for all available labels, their descriptions, and their colors is the `labels.yml` file at the root of this repository.
To apply these labels to the GitHub repository, run the following command:
```bash
make setup-labels
```
## GitHub Issue Template
To ensure consistency and capture all necessary information for both development and SR&ED tracking, we use a standardized issue template.
When creating a new issue on GitHub, select the **"SR&ED Task"** template. This will pre-populate the issue description with the following sections:
- **🎯 Objective:** A one-sentence summary of the goal.
- **🔬 SR&ED Justification:** A section to detail the technological uncertainty and the systematic investigation.
- **💻 Technical Implementation Notes:** A place for technical guidance for the developer.
- **✅ Acceptance Criteria:** A checklist to define what "done" means for this task.
Using this template is mandatory for all new development tasks.

issues-to-create.md (new empty file)

labels.yml (new file, 47 lines)

@@ -0,0 +1,47 @@
# This file is the single source of truth for GitHub labels.
# Run 'make setup-labels' to apply these to the repository.

# By Type of Work
- name: "bug"
  description: "Something isn't working"
  color: "d73a4a"
- name: "feature"
  description: "A new user-facing feature"
  color: "0075ca"
- name: "enhancement"
  description: "Minor improvement to an existing feature"
  color: "a2eeef"
- name: "infra"
  description: "Tasks for infrastructure, CI/CD, and project setup"
  color: "a2eeef"
- name: "documentation"
  description: "Tasks for creating or updating documentation"
  color: "0075ca"
- name: "refactor"
  description: "Code changes that neither fix a bug nor add a feature"
  color: "f29513"

# By Platform
- name: "platform:web"
  description: "Tasks specific to the React web app"
  color: "5319e7"
- name: "platform:mobile"
  description: "Tasks affecting both mobile apps"
  color: "5319e7"
- name: "platform:backend"
  description: "Tasks for Data Connect or Cloud Functions"
  color: "5319e7"

# For Project Management
- name: "sred-eligible"
  description: "Tasks identified as eligible for SR&ED claims"
  color: "d876e3"
- name: "priority:high"
  description: "Urgent or critical tasks"
  color: "b60205"
- name: "priority:medium"
  description: "Default priority"
  color: "fbca04"
- name: "priority:low"
  description: "Non-urgent, background tasks"
  color: "0e8a16"
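GitHub's label API takes each `color` value as a 6-digit hex string with no leading `#` (that format rule comes from the GitHub REST API, not from this file); a quick sanity check of the values above:

```python
import re

# Color values copied from labels.yml above
colors = ["d73a4a", "0075ca", "a2eeef", "f29513",
          "5319e7", "d876e3", "b60205", "fbca04", "0e8a16"]
hex6 = re.compile(r"^[0-9a-fA-F]{6}$")
valid = all(hex6.match(c) for c in colors)
print(valid)
```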

scripts/create_issues.py (new executable file, 92 lines)

@@ -0,0 +1,92 @@
#!/usr/bin/env python3
import subprocess
import os
import re

# --- Configuration ---
INPUT_FILE = "issues-to-create.md"
PROJECT_TITLE = "Krow"
# ---

def create_issue(title, body, labels, milestone):
    """Creates a GitHub issue using the gh CLI."""
    command = ["gh", "issue", "create"]
    command.extend(["--title", title])
    command.extend(["--body", body])
    command.extend(["--project", PROJECT_TITLE])
    if milestone:
        command.extend(["--milestone", milestone])
    for label in labels:
        command.extend(["--label", label])
    print(f" -> Creating issue: \"{title}\"")
    try:
        result = subprocess.run(command, check=True, text=True, capture_output=True)
        print(result.stdout.strip())
    except subprocess.CalledProcessError as e:
        print(f"❌ ERROR: Failed to create issue '{title}'.")
        print(f"   Stderr: {e.stderr.strip()}")

def main():
    """Main function to parse the file and create issues."""
    print(f"🚀 Starting bulk creation of GitHub issues from '{INPUT_FILE}'...")
    if subprocess.run(["which", "gh"], capture_output=True).returncode != 0:
        print("❌ ERROR: GitHub CLI (gh) is not installed.")
        exit(1)
    if not os.path.exists(INPUT_FILE):
        print(f"❌ ERROR: Input file {INPUT_FILE} not found.")
        exit(1)
    print("✅ Dependencies and input file found.")
    print(f"2. Reading and parsing {INPUT_FILE}...")
    with open(INPUT_FILE, 'r') as f:
        content = f.read()
    # Split the content by lines starting with '# '
    issue_blocks = re.split(r'\n(?=#\s)', content)
    for block in issue_blocks:
        if not block.strip():
            continue
        lines = block.strip().split('\n')
        title = lines[0].replace('# ', '').strip()
        labels_line = ""
        milestone_line = ""
        body_start_index = 1
        # Find all metadata lines (Labels, Milestone) at the beginning of the body
        for i, line in enumerate(lines[1:]):
            line_lower = line.strip().lower()
            if line_lower.startswith('labels:'):
                labels_line = line.split(':', 1)[1].strip()
            elif line_lower.startswith('milestone:'):
                milestone_line = line.split(':', 1)[1].strip()
            elif line.strip() == "":
                continue  # Ignore blank lines in the metadata header
            else:
                # This is the first real line of the body
                body_start_index = i + 1
                break
        body = "\n".join(lines[body_start_index:]).strip()
        labels = [label.strip() for label in labels_line.split(',') if label.strip()]
        milestone = milestone_line
        if not title:
            print("⚠️ Skipping block with no title.")
            continue
        create_issue(title, body, labels, milestone)
    print("\n🎉 Bulk issue creation complete!")

if __name__ == "__main__":
    main()
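The parser above assumes a specific layout for `issues-to-create.md`: each issue starts with a `# ` title line, optionally followed by `Labels:` and `Milestone:` metadata lines, then the body. A minimal sketch of that format and of the same split/extract logic (the sample issues are illustrative, not from the repository):

```python
import re

# Hypothetical issues-to-create.md content in the expected layout
content = """# [Web] Connect Events Page
Labels: sred-eligible, platform:web
Milestone: Milestone 1

Replace mock data with SDK hooks.

# [Infra] Setup Shared Dev Database
Labels: infra

Provision Cloud SQL for dev.
"""

# Same split the script uses: a new block starts at each line beginning '# '
blocks = [b for b in re.split(r"\n(?=#\s)", content) if b.strip()]
titles = [b.strip().split("\n")[0].replace("# ", "").strip() for b in blocks]
print(titles)

# Metadata extraction for the first block, mirroring the script's loop
first = blocks[0].strip().split("\n")
labels_line = next(
    (l.split(":", 1)[1].strip() for l in first[1:] if l.lower().startswith("labels:")),
    "",
)
labels = [s.strip() for s in labels_line.split(",") if s.strip()]
print(labels)
```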

scripts/export_issues.sh (new executable file, 63 lines)

@@ -0,0 +1,63 @@
#!/bin/bash
# ====================================================================================
# SCRIPT TO EXPORT SR&ED-ELIGIBLE GITHUB ISSUES TO A MARKDOWN FILE
# ====================================================================================
set -e # Exit script if a command fails
# --- Configuration ---
OUTPUT_FILE="sred-issues-export.md"
# This is the label we will use to identify SR&ED-eligible tasks
SRED_LABEL="sred-eligible"
ISSUE_LIMIT=1000
echo "🚀 Starting export of SR&ED-eligible issues to '${OUTPUT_FILE}'..."
# --- Step 1: Dependency Check ---
echo "1. Checking for 'gh' CLI dependency..."
if ! command -v gh &> /dev/null; then
    echo "❌ ERROR: GitHub CLI ('gh') is not installed. Please install it to continue."
    exit 1
fi
echo "✅ 'gh' CLI found."
# --- Step 2: Initialize Output File ---
echo "# Export of SR&ED-Eligible Issues" > "$OUTPUT_FILE"
echo "" >> "$OUTPUT_FILE"
echo "*This document lists the systematic investigations and experimental development tasks undertaken during this period. Export generated on $(date)*." >> "$OUTPUT_FILE"
echo "" >> "$OUTPUT_FILE"
# --- Step 3: Fetch SR&ED-Eligible Issues ---
echo "2. Fetching open issues with the '${SRED_LABEL}' label..."
# We use 'gh issue list' with a JSON output and parse it with 'jq' for robustness.
# This is more reliable than parsing text output.
issue_numbers=$(gh issue list --state open --label "${SRED_LABEL}" --limit $ISSUE_LIMIT --json number | jq -r '.[].number')
if [ -z "$issue_numbers" ]; then
    echo "⚠️ No open issues found with the label '${SRED_LABEL}'. The export file will be minimal."
    echo "" >> "$OUTPUT_FILE"
    echo "**No SR&ED-eligible issues found for this period.**" >> "$OUTPUT_FILE"
    exit 0
fi
total_issues=$(echo "$issue_numbers" | wc -l | xargs)
echo "✅ Found ${total_issues} SR&ED-eligible issue(s)."
# --- Step 4: Loop Through Each Issue and Format the Output ---
echo "3. Formatting details for each issue..."
current_issue=0
for number in $issue_numbers; do
    current_issue=$((current_issue + 1))
    echo " -> Processing issue #${number} (${current_issue}/${total_issues})"
    # Use 'gh issue view' with a template to format the output for each issue
    # and append it to the output file.
    gh issue view "$number" --json number,title,body,author,createdAt --template \
        '\n### Task: [#{{.number}}] {{.title}}\n\n**Hypothesis/Goal:** \n> *(Briefly describe the technological uncertainty this task addresses. What was the technical challenge or question?)*\n\n**Systematic Investigation:**\n{{if .body}}\n{{.body}}\n{{else}}\n*No detailed description provided in the issue.*\n{{end}}\n\n**Team:** {{.author.login}} | **Date Initiated:** {{timefmt "2006-01-02" .createdAt}}\n***\n' >> "$OUTPUT_FILE"
done
echo ""
echo "🎉 Export complete!"
echo "Your SR&ED-ready markdown file is ready: ${OUTPUT_FILE}"
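The Step 3 pipeline (`gh issue list --json number | jq -r '.[].number'`) can be mirrored offline to confirm what the script expects to receive; the JSON below is a made-up sample, not real issue data:

```python
import json

# Made-up sample of what `gh issue list --json number` prints;
# real numbers depend on the repository.
sample = '[{"number": 12}, {"number": 34}]'
issue_numbers = [item["number"] for item in json.loads(sample)]
print(issue_numbers)        # same values jq -r '.[].number' would emit
total_issues = len(issue_numbers)
print(total_issues)         # same count the wc -l pipeline computes
```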

scripts/setup-github-labels.sh (new executable file, 72 lines)

@@ -0,0 +1,72 @@
#!/bin/bash
# ====================================================================================
# SCRIPT TO SETUP STANDARD GITHUB LABELS FROM A YAML FILE (v4 - Robust Bash Parsing)
# ====================================================================================
set -e # Exit script if a command fails
LABELS_FILE="labels.yml"
echo "🚀 Setting up GitHub labels from '${LABELS_FILE}'..."
# --- Function to create or edit a label ---
create_or_edit_label() {
    NAME=$1
    DESCRIPTION=$2
    COLOR=$3
    if [ -z "$NAME" ] || [ -z "$DESCRIPTION" ] || [ -z "$COLOR" ]; then
        echo "⚠️ Skipping invalid label entry."
        return
    fi
    # The `gh api` command will exit with a non-zero status if the label is not found (404).
    # We redirect stderr to /dev/null to silence the expected "Not Found" error message.
    if gh api "repos/{owner}/{repo}/labels/${NAME}" --silent 2>/dev/null; then
        echo " - Editing existing label: '${NAME}'"
        gh label edit "${NAME}" --description "${DESCRIPTION}" --color "${COLOR}"
    else
        echo " - Creating new label: '${NAME}'"
        gh label create "${NAME}" --description "${DESCRIPTION}" --color "${COLOR}"
    fi
}
# --- Read and Parse YAML File using a robust while loop ---
# This approach is more reliable than complex sed/awk pipelines.
name=""
description=""
color=""
while IFS= read -r line || [[ -n "$line" ]]; do
    # Skip comments and empty lines. Note: [[ =~ ]] uses POSIX ERE, so we use
    # [[:space:]] rather than the non-portable \s.
    if [[ "$line" =~ ^[[:space:]]*# ]] || [[ -z "$line" ]]; then
        continue
    fi
    # Check for name
    if [[ "$line" =~ -[[:space:]]+name:[[:space:]]+\"(.*)\" ]]; then
        # If we find a new name, and the previous one was complete, process it.
        if [ -n "$name" ] && [ -n "$description" ] && [ -n "$color" ]; then
            create_or_edit_label "$name" "$description" "$color"
            # Reset for the next entry
            description=""
            color=""
        fi
        name="${BASH_REMATCH[1]}"
    # Check for description
    elif [[ "$line" =~ [[:space:]]+description:[[:space:]]+\"(.*)\" ]]; then
        description="${BASH_REMATCH[1]}"
    # Check for color
    elif [[ "$line" =~ [[:space:]]+color:[[:space:]]+\"(.*)\" ]]; then
        color="${BASH_REMATCH[1]}"
    fi
done < "$LABELS_FILE"
# Process the very last label in the file
if [ -n "$name" ] && [ -n "$description" ] && [ -n "$color" ]; then
    create_or_edit_label "$name" "$description" "$color"
fi
echo ""
echo "🎉 All standard labels have been created or updated successfully."
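The bash state machine above is easy to sanity-check offline; a Python sketch of the same parse, run against two entries copied from `labels.yml`:

```python
import re

# Two entries copied from labels.yml
yaml_snippet = '''# By Type of Work
- name: "bug"
  description: "Something isn't working"
  color: "d73a4a"
- name: "feature"
  description: "A new user-facing feature"
  color: "0075ca"
'''

labels, current = [], {}
for line in yaml_snippet.splitlines():
    # Skip comments and blank lines, as in the bash loop
    if re.match(r'\s*#', line) or not line.strip():
        continue
    m = re.search(r'-\s+name:\s+"(.*)"', line)
    if m:
        if current:
            labels.append(current)  # flush the previous entry
        current = {'name': m.group(1)}
        continue
    for field in ('description', 'color'):
        m = re.search(rf'\s+{field}:\s+"(.*)"', line)
        if m:
            current[field] = m.group(1)
if current:
    labels.append(current)  # flush the final entry, like the post-loop check
print(labels)
```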