Commit

Docs 08282024 (#45)
* docs: update multi aruco docs

* docs: updated image overall ach

* fix: Azure launch with --volume

* Update docker/azure-kinect/README.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* Update docker/azure-kinect/README.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* Update docker/azure-kinect/README.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* Update docs/ArUco_detection.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* Update docs/State_flow.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* Update src/aruco_pose/README.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* Update src/aruco_pose/README.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* Update docs/ArUco_detection.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* Update docs/Robustness_tests.md

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>

* refac: update docs

* refac: doc updates

* refac: update image

* refac: changed images

* refac: adjust readme

---------

Co-authored-by: Dr. Phil Maffettone <43007690+maffettone@users.noreply.github.com>
ChandimaFernando and maffettone authored Sep 30, 2024
1 parent 04df5c1 commit d0ee257
Showing 15 changed files with 185 additions and 0 deletions.
16 changes: 16 additions & 0 deletions docker/azure-kinect/99-k4a.rules
@@ -0,0 +1,16 @@

# Bus 002 Device 116: ID 045e:097a Microsoft Corp. - Generic Superspeed USB Hub
# Bus 001 Device 015: ID 045e:097b Microsoft Corp. - Generic USB Hub
# Bus 002 Device 118: ID 045e:097c Microsoft Corp. - Azure Kinect Depth Camera
# Bus 002 Device 117: ID 045e:097d Microsoft Corp. - Azure Kinect 4K Camera
# Bus 001 Device 016: ID 045e:097e Microsoft Corp. - Azure Kinect Microphone Array

BUS!="usb", ACTION!="add", SUBSYSTEM!="usb_device", GOTO="k4a_logic_rules_end"

ATTRS{idVendor}=="045e", ATTRS{idProduct}=="097a", MODE="0666", GROUP="plugdev"
ATTRS{idVendor}=="045e", ATTRS{idProduct}=="097b", MODE="0666", GROUP="plugdev"
ATTRS{idVendor}=="045e", ATTRS{idProduct}=="097c", MODE="0666", GROUP="plugdev"
ATTRS{idVendor}=="045e", ATTRS{idProduct}=="097d", MODE="0666", GROUP="plugdev"
ATTRS{idVendor}=="045e", ATTRS{idProduct}=="097e", MODE="0666", GROUP="plugdev"

LABEL="k4a_logic_rules_end"
65 changes: 65 additions & 0 deletions docker/azure-kinect/Dockerfile.txt
@@ -0,0 +1,65 @@
FROM osrf/ros:humble-desktop

RUN apt-get update \
&& apt-get -y install \
python3-pip \
build-essential \
cmake \
usbutils \
libgtk2.0-dev \
libusb-1.0 \
ffmpeg \
mlocate \
wget \
curl \
ros-${ROS_DISTRO}-joint-state-publisher \
python3-colcon-common-extensions \
software-properties-common

# For vnc viewer
RUN apt-get install -y x11vnc xvfb

# Clean up
RUN apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*

RUN curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc
RUN sudo apt-add-repository https://packages.microsoft.com/ubuntu/20.04/prod

RUN curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
RUN sudo apt-add-repository https://packages.microsoft.com/ubuntu/18.04/prod
RUN curl -sSL https://packages.microsoft.com/config/ubuntu/18.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft-prod.list
RUN curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

# || true forces through the errors without breaking the apt-get update
RUN sudo apt-get update || true

# The end user license agreement (EULA) variable is required; -y alone does not accept the EULA
RUN ACCEPT_EULA=Y apt-get install -y libk4a1.4 libk4a1.4-dev

RUN wget mirrors.kernel.org/ubuntu/pool/universe/libs/libsoundio/libsoundio1_1.1.0-1_amd64.deb
RUN sudo dpkg -i libsoundio1_1.1.0-1_amd64.deb
RUN sudo apt install -y k4a-tools

# Copy the rules
COPY 99-k4a.rules /etc/udev/rules.d/99-k4a.rules

RUN pip3 install xacro

RUN mkdir /root/temp_code/
RUN mkdir /root/ws/src -p

WORKDIR /root/temp_code/
RUN git clone https://github.com/ChandimaFernando/erobs.git -b azure-kinect
RUN mv /root/temp_code/erobs/src/kinect_recorder /root/ws/src/

WORKDIR /root/ws/src
RUN git clone https://github.com/microsoft/Azure_Kinect_ROS_Driver.git -b humble

WORKDIR /root/ws/
RUN /bin/bash -c ". /opt/ros/humble/setup.bash && colcon build"

# Set display
ENV DISPLAY=:1
18 changes: 18 additions & 0 deletions docker/azure-kinect/README.md
@@ -0,0 +1,18 @@
Build the image
```bash
podman build -t azure-kinect:latest .
```

The following command runs a container from this image without exposing the container's network to the host network.
```bash
podman run -it --volume="/dev/bus/usb:/dev/bus/usb" --rm azure-kinect:latest /bin/bash -c "Xvfb :1 -screen 0 2560x1440x16 & . /opt/ros/humble/setup.bash && . /root/ws/install/setup.sh && ros2 launch azure_kinect_ros_driver driver.launch.py"
```

The following command runs a container with this image using the host network as the container network.
```bash
podman run -it --volume="/dev/bus/usb:/dev/bus/usb" --rm --network host --ipc=host --pid=host azure-kinect:latest /bin/bash -c "Xvfb :2 -screen 0 2560x1440x16 & . /opt/ros/humble/setup.bash && . /root/ws/install/setup.sh && ros2 launch azure_kinect_ros_driver driver.launch.py depth_mode:=NFOV_UNBINNED point_cloud_in_depth_frame:=false"
```

In the above, `depth_mode:=NFOV_UNBINNED` selects a narrow field of view for increased depth range, and `point_cloud_in_depth_frame:=false` renders the point cloud in the RGB camera frame.

A number of Kinect camera parameters can be set via `driver.launch.py`. Refer to [this link](https://github.com/microsoft/Azure_Kinect_ROS_Driver/blob/6ffb95a56ee175e5020b5ee5983d7230befbb176/docs/usage.md) for the complete list of options.
17 changes: 17 additions & 0 deletions docs/ArUco_detection.md
@@ -0,0 +1,17 @@
# ArUco Multi-Tag Detection

Camera setup and its basics can be found in [this README.md file](../src/aruco_pose/README.md).

## Transform Server

When a new tag is detected, it is added to the transform server. Tags are uniquely identified by their ArUco tag ID. The image below shows an example in which both tag IDs 0 and 150 have been added to the transform server.

<img src="./images/transform_tree.png" alt="Multi-aruco detection" width="1000">

The custom action-server message `FidPoseControlMsg.action` has an extra field that specifies, by tag ID, which sample holder needs to be picked.

<img src="./images/select_pick_place.gif" alt="Multi-aruco detection" width="300">

On returning the sample holder, the user has the option to return it to a different storage location, provided the sample holder has previously been picked up. This keeps the message type consistent between pickup and return, and ensures all arguments of the message are interpreted. Below is the second part of the above video, in which the sample holder is returned to a different location:

<img src="./images/return_to_another_pickup.gif" alt="Multi-aruco detection" width="300">
30 changes: 30 additions & 0 deletions docs/Robustness_tests.md
@@ -0,0 +1,30 @@
# Sample Placement

In an exploratory study with the Azure Kinect mounted at a fixed position away from, and level with, the table-top, we found that the position of the sample holder has to satisfy two criteria in order to be successfully picked up by the gripper:
1. The fiducial tag has to be visible to the camera.
2. The sample holder has to face an angle at which the robotic arm can approach it face-on.

The following diagram plots the angles that satisfy the above two conditions.

<img src="./images/Detection_angles.png" alt="Viable detection angles" width="800">


# Robustness Tests

This document records the results of repeatability tests of sample placement and return.

## Sample pick and place using fiducial markers

In the image below, the sample holder is placed at 10 cm intervals.

<img src="./images/Robustness_testing_08142024.png" alt="Marker based pick and place" width="800">


Out of 30 placements using the fiducial markers, 24 succeeded (80%).

Failure reason: at the two positions more than 65 cm from the camera, marker detection succeeded only about 50% of the time.

## Sample return using fiducial markers

23 of 24 returns succeeded.
Failure reason: the sample holder at storage shifted slightly during the pick-up stage, because the storage is not fixed.
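The reported rates follow from a quick calculation (a sketch; the variable names are ours):

```python
# Success rates for the two repeatability tests reported above.
pick_place_rate = 24 / 30   # marker-based pick and place
return_rate = 23 / 24       # marker-based return

print(f"pick/place: {pick_place_rate:.0%}, return: {return_rate:.0%}")
# pick/place: 80%, return: 96%
```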
19 changes: 19 additions & 0 deletions docs/State_flow.md
@@ -0,0 +1,19 @@
# State Transition in pdf_beamtime

### Bluesky Mapping

The top-level tasks are issued via Bluesky commands, which are mapped to a ROS 2 action server. The image below maps the Bluesky commands to their respective behaviors on the action-client side.

<img src="./images/pdf_beamtime_arch_bluesky_logic.png" alt="Bluesky Command Mappings to Actions" width="800">

### Finite State Machine

The finite state machine handles the state transitions for each of the 'goals' sent via the action server.

<img src="./images/pdf_beamtime_arch_overall_fsm.png" alt="Finite State Machine" width="1500">

### Inner State Machine

Each state in the Finite State Machine has six sub-states. These sub-states detail the fine-grained behavior within each state, and enable interaction with the Bluesky `RunEngine` [process controls and interruptions](http://blueskyproject.io/bluesky/main/state-machine.html#interruptions). The image below shows the state transition diagram for the sub-states (inner states).

<img src="./images/pdf_beamtime_arch_inner_FSM.png" alt="Finite State Machine" width="800">
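A guarded-transition table is one common way to implement such an inner state machine. The sketch below is purely illustrative: the sub-state names are hypothetical (loosely modeled on Bluesky `RunEngine` interruption semantics), not the actual names defined in `pdf_beamtime`.

```python
from enum import Enum, auto

class InnerState(Enum):
    # Hypothetical sub-state names, for illustration only.
    RESTING = auto()
    MOVING = auto()
    PAUSED = auto()
    ABORT = auto()
    HALT = auto()
    STOP = auto()

# Allowed transitions, keyed by the current sub-state.
TRANSITIONS = {
    InnerState.RESTING: {InnerState.MOVING},
    InnerState.MOVING: {InnerState.PAUSED, InnerState.RESTING},
    InnerState.PAUSED: {InnerState.MOVING, InnerState.ABORT,
                        InnerState.HALT, InnerState.STOP},
    InnerState.ABORT: set(),
    InnerState.HALT: set(),
    InnerState.STOP: set(),
}

def can_transition(current: InnerState, target: InnerState) -> bool:
    """Return True if the inner state machine permits current -> target."""
    return target in TRANSITIONS.get(current, set())
```

Encoding the transitions as data keeps the guard logic in one place, so illegal jumps (e.g. resuming a goal that was never paused) are rejected uniformly.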
Binary file added docs/images/Detection_angles.png
Binary file added docs/images/Robustness_testing_08142024.png
Binary file added docs/images/pdf_beamtime_arch_bluesky_logic.png
Binary file added docs/images/pdf_beamtime_arch_inner_FSM.png
Binary file added docs/images/pdf_beamtime_arch_overall_fsm.png
Binary file added docs/images/return_to_another_pickup.gif
Binary file added docs/images/select_pick_place.gif
Binary file added docs/images/transform_tree.png
20 changes: 20 additions & 0 deletions src/aruco_pose/README.md
@@ -1,5 +1,13 @@
# Detecting ArUco Markers with Azure Kinect

## Camera Operation

Run the following command to launch the camera.

```bash
podman run -it --rm --privileged --network host --ipc=host --pid=host azure-kinect:latest /bin/bash -c "Xvfb :2 -screen 0 2560x1440x16 & . /opt/ros/humble/setup.bash && . /root/ws/install/setup.sh && ros2 launch azure_kinect_ros_driver driver.launch.py depth_mode:=NFOV_UNBINNED point_cloud_in_depth_frame:=false"
```

## Camera Parameters

In OpenCV, the distortion coefficients are usually represented as a 1x8 matrix:

For the Azure Kinect, these coefficients are reported when running the ROS 2 camera node and are explained in [this link](https://microsoft.github.io/Azure-Kinect-Sensor-SDK/master/structk4a__calibration__intrinsic__parameters__t_1_1__param.html).
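To make the coefficient layout concrete, here is a sketch of OpenCV's rational distortion model applied to a normalized image point. The function and sample values below are ours, written for illustration; the real coefficients come from the camera calibration.

```python
def distort_point(x, y, k1, k2, p1, p2, k3, k4=0.0, k5=0.0, k6=0.0):
    """Apply OpenCV's rational radial + tangential distortion model
    to a point (x, y) in normalized camera coordinates."""
    r2 = x * x + y * y
    # Rational radial term: (1 + k1 r^2 + k2 r^4 + k3 r^6) / (1 + k4 r^2 + k5 r^4 + k6 r^6)
    radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / \
             (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3)
    # Tangential terms use p1 and p2.
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero, the point is unchanged:
print(distort_point(0.1, 0.2, 0, 0, 0, 0, 0))  # (0.1, 0.2)
```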

## Positioning of the Camera

The parameters `cam_translation.x`, `cam_translation.y`, and `cam_translation.z` are measured from the base of the robot, in the robot's coordinate frame.

Hint: Use the robot arm to determine the robot's x and y axes by moving the arm along the `x=0` and `y=0` Cartesian lines. Then use a tape measure to measure the (projected) distances from each axis to the camera lens.
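The measured offsets then act as a translation between frames. A minimal sketch, assuming an identity camera rotation purely for illustration (the real setup also needs the camera's orientation; the numbers below are hypothetical, not real measurements):

```python
def camera_to_robot(point_cam, cam_translation):
    """Translate a camera-frame point into the robot's coordinate frame
    (identity rotation assumed for this illustration)."""
    return tuple(p + t for p, t in zip(point_cam, cam_translation))

# Hypothetical measurements, in metres, from the robot base to the camera lens:
cam_translation = (0.5, -0.25, 0.75)  # cam_translation.x, .y, .z
print(camera_to_robot((0.125, 0.0, 0.5), cam_translation))  # (0.625, -0.25, 1.25)
```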

## ArUco markers

A complete guide to ArUco markers is available at [this link](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html).

## Tag Family

In our work, we use the marker family `tag36h11`, a family of tags detectable by both ArUco and AprilTag detection. [This link](https://github.com/AprilRobotics/apriltag-imgs/tree/master/tag36h11) points to the AprilTag GitHub repository that hosts pre-generated images.

Prior to detection, the printed tags (all of equal size) need to be measured precisely and the size recorded under the parameter `physical_marker_size`.

## Tag detection
When multiple tags are present, the function `estimatePoseSingleMarkers()` estimates the pose of each detected marker separately. At present, we only estimate the pose when exactly one marker is present, ignoring the cases where no marker or more than one marker is visible.
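This single-marker guard can be sketched as a small predicate on the detected IDs. The function name and the example IDs below are ours; the real node obtains the IDs from OpenCV's ArUco detection on the camera image.

```python
def should_estimate_pose(detected_ids):
    """Proceed with pose estimation only when exactly one marker is visible;
    skip the no-marker and multi-marker cases."""
    return len(detected_ids) == 1

print(should_estimate_pose([]))        # False: no marker visible
print(should_estimate_pose([150]))     # True: a single marker
print(should_estimate_pose([0, 150]))  # False: multiple markers, skipped
```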

