Docs: Add image transformation algorithm to lessons_learned.md
(commit a5e3298bd8, parent 7036e466e3)
# Lessons Learned from Hyperion Grabber Wayland Adaptation

This document summarizes key insights relevant to implementing direct WLED communication, derived from analyzing the Hyperion.NG codebase.

## Image Transformation Algorithm (HyperionGrabber)

The `HyperionGrabber` is responsible for acquiring video frames, transforming them, and sending them to the Hyperion server (via `HyperionClient`). The core image transformation logic is found in the `_processFrame` method of `hyperiongrabber.cpp`.

**Algorithm:**

```pseudocode
Function _processFrame(input_frame: QVideoFrame):

    // 1. Validate the input frame
    If NOT input_frame.isValid():
        Return

    // 2. Increment frame counter and apply frame skipping if configured
    _frameCounter_m = _frameCounter_m + 1
    If _frameskip_m > 0 AND (_frameCounter_m MOD (_frameskip_m + 1) != 0):
        Return

    // 3. Convert QVideoFrame to QImage
    image = input_frame.toImage()
    If image.isNull():
        Log "Failed to convert QVideoFrame to QImage."
        Return

    // 4. Calculate target size for scaling based on _scale_m (default 8)
    target_width = image.width() / _scale_m
    target_height = image.height() / _scale_m
    target_size = QSize(target_width, target_height)
    If NOT target_size.isValid():
        Log "Invalid target size for scaling."
        Return

    // 5. Scale the image using Qt's optimized scaling with smooth transformation
    scaled_image = image.scaled(target_size, Qt::IgnoreAspectRatio, Qt::SmoothTransformation)

    // 6. Convert image format to RGB888 if necessary (24-bit RGB)
    If scaled_image.format() IS NOT QImage::Format_RGB888:
        scaled_image = scaled_image.convertToFormat(QImage::Format_RGB888)

    // 7. Send the processed image dimensions and data to the Hyperion client
    _hclient_p.setImgSize(scaled_image.width(), scaled_image.height())
    _hclient_p.sendImage(scaled_image.constBits(), scaled_image.sizeInBytes())

End Function
```

## 1. Hyperion Image Processing Pipeline (Relevant for WLED Direct Communication)

Understanding Hyperion's image processing is crucial for replicating its functionality in a direct WLED grabber. The pipeline involves several key steps:

* **Image Decoding & Scaling:**
    * Hyperion decodes base64-encoded image data into `QImage` objects.
    * It performs image scaling: both user-defined (`data.scale`) and forced scaling if image dimensions exceed `IMAGE_WIDTH_MAX` or `IMAGE_HEIGHT_MAX` (both 2000 pixels).
    * Images are converted to a raw RGB format (3 bytes per pixel, `QImage::Format_ARGB32_Premultiplied` internally, then extracted as RGB triplets).
* **LED Mapping (`ImageProcessor::process`):**
    * This is the core step where image pixels are mapped to individual LEDs.
    * It utilizes a `LedString` object, which defines the geometry and properties of the LED strip.
    * Each `Led` in the `LedString` has fractional coordinates (`minX_frac`, `maxX_frac`, `minY_frac`, `maxY_frac`) defining the region of the image that contributes to its color.
    * Various mapping algorithms (e.g., `getMeanLedColor`, `getDominantLedColor`) are applied to calculate a single `ColorRgb` value for each LED from its corresponding image region.
* **Color Adjustment (`ColorAdjustment::applyAdjustment`):**
    * After the initial LED colors are determined, color adjustments are applied.
    * This includes transformations such as gamma correction, brightness, and contrast.
* **Color Order Correction:**
    * The final step before sending data to the LED device reorders the RGB bytes for each LED.
    * This is based on the `ColorOrder` specified for each individual LED in the `Led` struct (e.g., RGB, BGR, GRB).

## 2. LED Layout (`LedString`)

* The LED layout is defined by a `QJsonArray` (the `LEDS` setting in Hyperion's configuration). This array is parsed into a `LedString` object.
* The `LedString` object contains a vector of `Led` structs, each specifying the fractional coordinates (`minX_frac`, `maxX_frac`, `minY_frac`, `maxY_frac`) within the image that correspond to that LED, and its specific `ColorOrder`.

## 3. WLED Direct Communication Strategy

* **Goal:** Implement a grabber that directly communicates with a WLED device (at `192.168.178.69`) using the WLED protocol, completely bypassing the Hyperion server.
* **LED Layout Input:** The LED layout parameters (defining the `LedString`) will be provided by the user via command-line arguments in the final grabber application. This simplifies the grabber's internal configuration parsing, as it will not need to read complex JSON configuration files for the LED layout.
* **Required Implementation:** The grabber will need to replicate the core image processing functionality observed in Hyperion: image decoding, scaling, LED mapping (using the provided layout), color adjustment, and color order correction. The final processed RGB data will then be sent using the WLED protocol.
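The frame-skipping and downscaling control flow in `_processFrame` (steps 2, 4 and 5) does not depend on Qt and can be sketched in plain Python. This is an illustrative sketch, not Hyperion's code: the names `should_process` and `downscale` are hypothetical, and nearest-pixel sampling stands in for `QImage::scaled` with `Qt::SmoothTransformation`, which averages pixels instead.

```python
def should_process(frame_counter: int, frameskip: int) -> bool:
    """Mirror of step 2: with frameskip = N, only every (N+1)-th frame is kept."""
    if frameskip > 0 and frame_counter % (frameskip + 1) != 0:
        return False
    return True

def downscale(pixels, scale: int):
    """Mirror of steps 4-5: shrink a row-major RGB image by an integer factor.

    Nearest-pixel sampling; raises on an invalid (empty) target size,
    matching the pseudocode's isValid() check.
    """
    height, width = len(pixels), len(pixels[0])
    target_w, target_h = width // scale, height // scale
    if target_w == 0 or target_h == 0:
        raise ValueError("Invalid target size for scaling.")
    return [[pixels[y * scale][x * scale] for x in range(target_w)]
            for y in range(target_h)]
```

Note that the counter is incremented before the check, so with `frameskip = 2` frames 3, 6, 9, … are processed.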
|
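The LED mapping step reduces each LED's fractional image region to a single color. The following is a minimal Python sketch of what a mean-color mapping such as `getMeanLedColor` computes for one LED, not Hyperion's actual implementation; the image is assumed to be a row-major list of `(r, g, b)` tuples.

```python
def mean_led_color(pixels, min_x_frac, max_x_frac, min_y_frac, max_y_frac):
    """Average the RGB pixels inside one LED's fractional image region.

    Sketch of a mean-color mapping for a single Led; the fractional
    coordinates correspond to minX_frac/maxX_frac/minY_frac/maxY_frac.
    """
    height, width = len(pixels), len(pixels[0])
    # Convert fractional coordinates to inclusive pixel bounds.
    x0, x1 = int(min_x_frac * (width - 1)), int(max_x_frac * (width - 1))
    y0, y1 = int(min_y_frac * (height - 1)), int(max_y_frac * (height - 1))
    r = g = b = count = 0
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            pr, pg, pb = pixels[y][x]
            r, g, b, count = r + pr, g + pg, b + pb, count + 1
    return (r // count, g // count, b // count)
```

In a direct-WLED grabber this would run once per `Led` in the `LedString`, producing one color per LED.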
||||
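The exact math inside `ColorAdjustment::applyAdjustment` is not reproduced here; the following is a commonly used per-channel gamma and brightness formulation that a direct-WLED grabber could use as a stand-in (an assumption, not Hyperion's code).

```python
def apply_adjustment(rgb, gamma=2.2, brightness=1.0):
    """Illustrative gamma + brightness transform per 8-bit channel.

    Stand-in for a color adjustment stage: normalize to 0..1, apply
    gamma, scale by brightness, clamp back to 0..255.
    """
    out = []
    for c in rgb:
        v = (c / 255.0) ** gamma * brightness * 255.0
        out.append(max(0, min(255, round(v))))
    return tuple(out)
```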
|
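Color order correction is a pure byte shuffle and can be sketched directly; here `color_order` corresponds to the per-`Led` `ColorOrder` value (e.g., RGB, BGR, GRB).

```python
def reorder(rgb, color_order: str):
    """Reorder an (r, g, b) tuple according to a ColorOrder string
    such as "rgb", "bgr" or "grb", as done per LED before output."""
    channels = {"r": rgb[0], "g": rgb[1], "b": rgb[2]}
    return tuple(channels[c] for c in color_order.lower())
```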
||||
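For the final send step, WLED's UDP realtime mode (default port 21324) expects a protocol byte, a timeout in seconds, then raw color data: protocol 2 (DRGB) carries plain RGB triplets for up to 490 LEDs, and DNRGB (4) adds a start index for longer strips. A minimal sketch, with packet layout as documented for WLED and hypothetical helper names:

```python
import socket

DRGB = 2  # WLED UDP realtime protocol id for raw RGB frames

def build_drgb_packet(colors, timeout_s: int = 2) -> bytes:
    """Build a WLED DRGB realtime packet: [protocol, timeout, r0, g0, b0, ...]."""
    packet = bytearray([DRGB, timeout_s])
    for r, g, b in colors:
        packet += bytes([r, g, b])
    return bytes(packet)

def send_frame(colors, host: str, port: int = 21324) -> None:
    """Send one frame of per-LED colors to a WLED device over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(build_drgb_packet(colors), (host, port))
```

The timeout byte tells WLED how many seconds to wait after the last packet before returning to its normal effects.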