Posts

Showing posts from March, 2026

The Avocado Nutrient Roadmap: When to Switch from DAP to Potassium

Introduction

In my first post, we looked at the basics of starting an orchard in Kenya. Today, we're getting technical. If you want your Hass and Fuerte trees to move from "looking green" to "bearing heavy," you have to understand that their diet must change as they age. Sticking with the same fertilizer for three years is a recipe for small fruits and weak trees.

The Early Years: "Building the Engine" (Years 1-2)

When you first transplant your seedling, the goal isn't fruit; it's roots and a strong stem.

The Nitrogen/Phosphorus Phase: During this stage, I rely on DAP (Diammonium Phosphate) or DSP (Double Superphosphate) mixed with well-decomposed manure.

The Why: Phosphorus is the primary fuel for root development. Without a massive root system, your tree won't be able to "drink" enough water once it starts producing heavy avocados.

My Routine: I apply approximately 125g to 200g of DAP per tree, split into two applicat...
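The split-application arithmetic in my routine is simple enough to script. A minimal sketch (the 125g-200g range and the two-way split come from the routine above; the function name and example values are purely illustrative):

```python
def dap_split_doses(total_grams: float, applications: int = 2) -> list[float]:
    """Divide one tree's seasonal DAP allowance into equal applications."""
    if applications < 1:
        raise ValueError("need at least one application")
    dose = total_grams / applications
    return [dose] * applications

# A young tree at the low and high ends of the 125g-200g range,
# each split into two top-dressings
print(dap_split_doses(125))  # [62.5, 62.5]
print(dap_split_doses(200))  # [100.0, 100.0]
```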

Starting an Avocado Orchard in Kenya: 5 Lessons from the Field

Introduction

Transitioning from a digital creator to an avocado farmer has been one of my most rewarding challenges. Whether you are looking at the export potential of Hass or the local market strength of Fuerte, starting an orchard in Kenya requires more than just digging a hole. In this post, I'm sharing the raw reality of managing young seedlings and the technical steps I'm taking to ensure a high-yield future.

1. Seedling Selection: Hass vs. Fuerte

In my orchard, I've chosen a mix of both varieties.

Hass: The king of exports. It's hardy, has a long shelf life, and the global demand is insatiable.

Fuerte: Often used as a pollinator for Hass, but a powerhouse in its own right for the local Kenyan market and oil extraction.

Tip: Always ensure you get grafted seedlings from certified nurseries or entities like KALRO to avoid "blind" trees that take years to fruit.

2. The Critical First 6 Months

https://photos.app.goo.gl/C91ogwsUBE7YvmYv6

My young avocado seedlin...

Project Aura Roadmap

As we reach a major milestone in the development of Project Aura, it is time to look at the path ahead. Integrating high-level AI with physical hardware requires a phased approach. Here is how we are scaling Aura Intelligence over the coming months.

Phase 1: The Digital Foundation (Completed)

Architecture: Successful integration of ROS 2 Jazzy and FastAPI.
Simulation: Deployment of the Godot 4 Digital Twin with coi-serviceworker support for web browsers.
Open Source: Establishing the Apache 2.0 licensed repository on GitHub.

Phase 2: Hardware Synthesis (Q2 2026)

Actuation: Finalizing the micro-stepping logic for NEMA 17 motors via Raspberry Pi 5 GPIO.
VLA Integration: Testing NVIDIA GR00T (N1.6-3B) for basic object recognition and spatial reasoning in the Kenyan environment.
Power Optimization: Refining buck converter efficiency for sustained field operations.

Phase 3: Agricultural Edge-AI (Q3-Q4 2026)

Orchard Deployment: Moving the prototype into the Powerdreams avocado grove...

Project Aura and Powerdreams

I am the founder of Powerdreams and the lead developer of Project Aura. Based in Kenya, my work sits at the intersection of Precision Agriculture and Edge-AI Robotics.

My journey began in the orchards, managing a mixed commercial grove of Hass and Fuerte avocado trees. Facing the real-world challenges of local soil health (KALRO standards) and nutrient management, I realized that the future of farming lies in automation.

Today, I am building Project Aura, a robotics initiative focused on bridging the gap between digital twins and physical hardware. Using the Raspberry Pi 5, ROS 2 Jazzy, and NVIDIA's GR00T N1.6-3B models, I am developing low-latency control systems for NEMA 17 actuators. Aura Intelligence is my platform for sharing these technical breakthroughs, from FastAPI backend configurations to real-time Godot simulations. My goal is to empower the next generation of African creators to build high-performance, open-source technology that solves local and global challenges.

Actuating the Manifest: Syncing Project Aura with GitHub Codespaces

The Engineering Milestone

In my previous posts, we discussed the theoretical framework of Project Aura and the integration of NVIDIA GR00T. Today, we take the project live. I have officially deployed the project's technical manifest and licensing structure using GitHub Codespaces, creating a professional "Source of Truth" for my robotics research.

The Manifest (llms.txt)

To facilitate better interaction with AI-driven development tools and search crawlers, I have introduced an llms.txt file in the root directory. This manifest provides immediate context for our hardware stack:

Compute: Raspberry Pi 5
Middleware: ROS 2 Jazzy
Actuation: NEMA 17 Steppers + A4988 Drivers
Cloud: Google Cloud Vertex AI

Open Source Governance

Transparency is key in robotics. I have chosen the Apache License 2.0 for this repository. This ensures that the telemetry logic and hardware-in-the-loop (HIL) configurations I develop are protected yet accessible for the engineering community.

View ...
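For readers who haven't seen the format: an llms.txt is a plain Markdown file served from a project root. The sketch below only restates the hardware stack listed above; the actual wording and structure of my manifest differ.

```
# Project Aura

> Open-source Edge-AI robotics research, licensed under Apache 2.0.

## Hardware Stack
- Compute: Raspberry Pi 5
- Middleware: ROS 2 Jazzy
- Actuation: NEMA 17 Steppers + A4988 Drivers
- Cloud: Google Cloud Vertex AI
```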

Fine-Tuning the GR00T N1.6-3B for Precision Actuation

The Goal: From "Generalist" to "Specialist"

While the base GR00T N1.6-3B model is a powerful Vision-Language-Action (VLA) foundation, it is trained on diverse humanoid data that doesn't always account for the specific torque curves of NEMA 17 steppers. To achieve sub-millimeter precision in our pallet-handling tasks, we must perform a targeted fine-tuning run using a custom dataset collected from our own hardware.

Dataset Preparation: The "Aura-Collect" Method

High-quality demonstrations are the lifeblood of fine-tuning. For Project Aura, we collected 40 high-fidelity "Success" trajectories.

Demonstration Quality: We avoided jerky movements and long pauses, as the model will learn those inefficiencies as intentional behaviors.

Modality Mapping: We updated our modality.json to map the Pi 5's camera stream to the observation.images.main key, ensuring the model's visual transformer identifies the pallet correctly.

Technical Impl...
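Screening demonstrations for pauses and jerky jumps can be done with pure Python before any data reaches the model. A minimal sketch (the function name, thresholds, and scalar-position simplification are illustrative, not our production tooling):

```python
def screen_trajectory(timestamps, positions, max_pause_s=1.0, max_step=0.05):
    """Flag a demonstration containing long pauses or jerky jumps.

    timestamps: seconds, strictly increasing; positions: scalar joint positions.
    Returns human-readable reasons; an empty list means the demo passes.
    """
    reasons = []
    for i in range(1, len(timestamps)):
        dt = timestamps[i] - timestamps[i - 1]
        dx = abs(positions[i] - positions[i - 1])
        if dt > max_pause_s:
            reasons.append(f"pause of {dt:.2f}s at sample {i}")
        if dx > max_step:
            reasons.append(f"jump of {dx:.3f} at sample {i}")
    return reasons

# A smooth demo passes; one with a 3-second stall is flagged
print(screen_trajectory([0.0, 0.1, 0.2], [0.00, 0.01, 0.02]))  # []
print(screen_trajectory([0.0, 0.1, 3.1], [0.00, 0.01, 0.02]))  # ['pause of 3.00s at sample 2']
```

Rejected trajectories are simply excluded from the fine-tuning set rather than repaired.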

Project Aura: M2M Operational Pillars

To build a successful M2M business, these four layers must work in harmony. For Project Aura, we have mapped our stack directly to these Industrial IoT (IIoT) standards:

1. The Device Layer (The "M" in M2M)

This is the physical hardware capable of sensing and acting.
Aura Implementation: The Raspberry Pi 5 acting as the primary compute module, interfacing with NEMA 17 actuators via the Sentinel API.
Key Metric: Hardware availability and MTBF (Mean Time Between Failures).

2. The Connectivity Layer (The "2" in M2M)

The communication "pipe" that transports data.
Aura Implementation: Utilizing ROS 2 Jazzy for decentralized messaging and secure TLS-encrypted tunnels for the GCS Cloud Sync we deployed.
Business Value: Reliability. Without a stable "2," the machine is isolated and the revenue model fails.

3. The Platform/Middleware Layer

The "Brain" where data is normalized and managed.
Aura Implementation: Google Cloud Storage (GCS) for dat...
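The MTBF metric named for the Device Layer is straightforward to compute from operating logs. A minimal sketch (the hour and failure counts below are made-up illustration values, not Aura field data):

```python
def mtbf_hours(total_operating_hours: float, failure_count: int) -> float:
    """Mean Time Between Failures: operating time divided by failure count."""
    if failure_count == 0:
        raise ValueError("no failures recorded; MTBF is undefined for this window")
    return total_operating_hours / failure_count

# e.g. 1,200 logged field hours with 3 actuator faults
print(mtbf_hours(1200, 3))  # 400.0
```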

Cloud-Native Telemetry – Syncing ROS 2 Logs to GCP

The Challenge: Edge Data vs. Storage Limits

Our Project Aura robot generates approximately 150MB of telemetry data per hour of active testing. Relying on the Raspberry Pi 5's microSD card for long-term storage is a risk: SD cards have limited write cycles and are prone to corruption during power fluctuations. To ensure our N1.6-3B model training data is never lost, we have implemented an automated GCP Cloud Sync pipeline.

Architecture: The Cloud-to-Edge Bridge

The system is designed with security as the priority. We utilize a dedicated Service Account following the "Least Privilege" principle, ensuring the robot can only create objects in its specific bucket, but cannot delete or modify historical data.

Security Configuration:
IAM Role: roles/storage.objectCreator
Authentication: JSON key file (stored in a root-restricted directory)
Network: Encrypted TLS 1.3 tunnel

Implementation: The Python Sync Engine

We developed a lightweight Python utility that runs as a background pr...
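The uploader itself needs GCP credentials, but its core decision, which log files are quiescent enough to ship safely, is pure Python. A hedged sketch (directory layout, age threshold, and function name are illustrative; the real engine then passes each selected path to the google-cloud-storage client):

```python
import os
import time

def files_ready_for_upload(log_dir: str, min_age_s: float = 300.0) -> list[str]:
    """Return log files untouched for at least min_age_s seconds.

    Uploading only quiescent files avoids shipping a log that a ROS 2
    node is still appending to.
    """
    now = time.time()
    ready = []
    for name in sorted(os.listdir(log_dir)):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) >= min_age_s:
            ready.append(path)
    return ready
```

A real sync loop would upload each returned file under the objectCreator-scoped service account and delete the local copy only after a confirmed upload, keeping the microSD card near-empty.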

Training the Sentinel – Predictive Maintenance with Vertex AI

1. The Concept: Beyond Simple Logging

Once our Raspberry Pi 5 uploads the NEMA 17 motor telemetry to our GCS Bucket, we don't just let it sit there. We use Vertex AI to identify patterns of "micro-stalls": tiny drops in torque that a human wouldn't notice, but that indicate a physical gear is about to fail.

2. Connecting the Bucket to Vertex AI

To train our model, we create a Dataset in Vertex AI that points directly to our project-aura-vault/telemetry/ folder.
The Logic: We use a Time-Series Forecasting model.
The Goal: To predict the "Remaining Useful Life" (RUL) of our actuators.

3. The Analysis Script (Cloud-Side)

You don't run this on the Pi; you run it in a Vertex AI Notebook.

Vertex AI + Sentinel: Telemetry Analytics

Our Sentinel API now integrates with Google Cloud Vertex AI to perform real-time failure prediction on motor telemetry logs.

import pandas as pd
from google.cloud import aiplatform

# Initialize Vertex AI
aiplatform.init(projec...
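In production the forecasting model does the heavy lifting, but the RUL intuition can be sketched without any cloud dependency: fit the torque-decay trend and solve for when it crosses a failure threshold. A minimal sketch (function name, thresholds, and every number here are illustrative, not Aura telemetry):

```python
def estimate_rul_hours(hours, torque_nm, failure_torque_nm):
    """Extrapolate Remaining Useful Life from a linear torque-decay fit.

    Fits torque = slope * hour + intercept by least squares, then solves
    for the hour at which torque crosses the failure threshold.
    """
    n = len(hours)
    mean_h = sum(hours) / n
    mean_t = sum(torque_nm) / n
    cov = sum((h - mean_h) * (t - mean_t) for h, t in zip(hours, torque_nm))
    var = sum((h - mean_h) ** 2 for h in hours)
    slope = cov / var
    intercept = mean_t - slope * mean_h
    if slope >= 0:
        return float("inf")  # no degradation trend detected
    failure_hour = (failure_torque_nm - intercept) / slope
    return failure_hour - hours[-1]

# Torque decaying ~0.01 Nm per hour from 0.40 Nm; failure below 0.10 Nm
print(estimate_rul_hours([0, 10, 20], [0.40, 0.30, 0.20], 0.10))
```

A time-series model earns its keep when the decay is nonlinear or noisy; this linear version only shows what "predicting RUL" means mechanically.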