Renewed Agency as #Abilities meets Internet of Things (IoT), Messaging, 3d Printing and Optical Markers
Mar 14, 2024
Daunting #Abilities challenges are met with new technologies (photo by Smart Animal Training and

Daily challenges faced by those with disabilities are daunting; technologies the rest of us take for granted can be hard or impossible to use (e.g., smartphones, apps, buttons, speech, vision, mobility). This lack of “agency” — doing it yourself — is frustrating. The result is a large population segment that could use help. There is a new push in a segment nicely called #Abilities — where new technical concepts help users regain “agency” through renewed Abilities.

21st-century technologies can help. The Internet of Things (IoT) is a new paradigm of small devices that sense their environment and, through messaging, collaborate with users and other IoT devices. Imagine steering a wheelchair by eye tracking, rewarding your service dog via an IoT dog feeder, or using motion or vision sensing to perform operations. Smart!

#Abilities needs the mindset of Trinity (from The Matrix) when asked if she can fly a nearby helicopter. She responds with “not yet” and calls her remote agent, Tank, to download that flying knowledge. The result is renewed Agency.

Internet of Things (IoT) above the Arctic Circle in remote Alaska, talking over Bluetooth without WiFi. Author’s photo at 10 pm on the longest day.

Briefcase full of Things: Internet of Things

This paper will discuss how a Briefcase full of Things — the Internet of Things (IoT) — has a unique opportunity to bring agency back to the #Abilities market, especially when thinking outside the box (like a 3D-printed box).

Briefcase full of things: Internet of Things (IoT) — Controllers and Sensors (from the Author’s collection)

This paper will also introduce our Semantic Marker™️ concept which especially leverages the IoT framework discussed here.

#Abilities Challenges (touching, selecting, sensing)

The Abilities challenges are daunting! Touching buttons, whether physical or on a smartphone, is hard. You’ve seen images of Stephen Hawking in a wheelchair using voice commands, others blowing into a straw to perform operations, using their head to bump a “button”, or waving a hand for motion sensing. Eye tracking is a newer approach where a camera tracks the user’s eyes, letting them type entire email messages or steer the wheelchair.

Wheelchairs are a major #Abilities segment, but other areas are important too. Even non-technical users have trouble touching or swiping web buttons, or just changing the channel on a new TV. Many use a non-21st-century phone (e.g., flip phones with big buttons are normal). Thus “apps” are rarely used (good luck ordering an Uber ride-share). Any change in routine is hard. The result is frustration and limited agency.

Stroke victims have a hard time using the phone; tactile feedback and screen swiping are hard. All this reduces their communication skills and reduces their agency. — Laurence DeShields — ER Doctor

The concept of a remote agent or handler is important — and made possible with this IoT framework. Just like Trinity’s remote agent Tank, setting up an environment for success is valuable. Traditionally a helper programs all the things — creating a routine (like managing the Uber ride-share). But the potential for users to do this programming themselves is being explored. Common tasks can then be stored and re-played by the users themselves. Various IoT options help with the re-playing. Even pointing at an optical visual marker can trigger these operations (see the Semantic Marker™️ and PAT sections below).

A new generation of ESP-32-based Internet of Things (IoT) devices and sensors, connected through a robust global messaging framework, is a powerful foundation supporting #Abilities. Wrapping the IoT devices in unique 3D-printed models then opens up new and unique agency potential. Remote control by remote agents is also possible with the messaging framework described below.

ESP-32 IoT Sensors and Controllers

Collection of ESP-32 IoT devices from the Author and Smart Animal Training (e.g., Remote IoT Dog Feeder). NOTE: the lids are off the feeders to show the results of test messages.

The M5Stack company is leading the way in making and packaging a new suite of IoT devices based on the ESP-32 chip and the Arduino development environment. Hundreds of devices, sensors, and controllers are available, ranging in price from $3 US to $20 and up. Combined with 3D printing (described later), unique packaging and deployment is possible (such as the special wheelchair mounts for the dog feeder shown above).

The ESP-32 chip is a game changer: it shares one radio between Bluetooth and WiFi, and provides a dual-core processor, a real-time OS, and enough memory — all in the size of a quarter. This means every individual IoT device can work off the grid using Bluetooth Low Energy (BLE) — and when a WiFi network is added, there is new potential for worldwide connectivity down to every edge node. Coding commonly uses a mix of C and C++ in a very high-level component architecture (versus traditional low-level device coding).

IoT sensors monitor the environment and report their findings (via controllers) through messages (i.e., JSON over MQTT or Bluetooth). They can sense sound, vibration, tilt (accelerometers), temperature, moisture, humidity, motion, distance to objects, IR, RF, RFID, button touches, spoken words, gestures, infrared camera imagery, GPS location, light, dark, radio signal strength (RSSI), and eye tracking. And recently: seeing and precisely decoding 2D optical vision markers (e.g., barcodes and QR codes) — more on that later in the Semantic Marker™️ and PAT (Point at Things) sections below.
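As a sketch, a sensor report like those above might be packaged as a small JSON message before being sent over MQTT or Bluetooth. The field names here are invented for illustration; no particular schema from our devices is implied.

```python
import json

# Hypothetical sensor message: wrap one sensed value as JSON for
# publishing over MQTT or BLE (field names are illustrative only).
def make_sensor_message(device, sensor, value, units):
    return json.dumps({
        "device": device,   # the device's addressable name (see "Names for everything")
        "sensor": sensor,   # e.g., "temperature", "distance", "RSSI"
        "value": value,
        "units": units,
    })

msg = make_sensor_message("Francis", "temperature", 21.5, "C")
decoded = json.loads(msg)   # a receiving controller decodes it back
```

Because the payload is plain JSON, the same message can travel over MQTT, BLE, or even be pasted into a web call unchanged.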

IoT controllers are local edge nodes that interact with the sensors (internal or physically connected) and read the sensed values. These processors can make decisions (too hot, too close, too wet, too dark, just right) about when to wrap relevant values into messages for interested parties (using JSON over MQTT), how to display values (on potentially resource-constrained screens), and when to use sounds (words or beeps), vibrating haptics, or Augmented Reality (AR) graphic overlays.
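The “too hot, too close, just right” decision can be sketched like this; the bounds and the publish-on-change rule are assumptions for illustration, not our actual firmware logic.

```python
# Hypothetical controller decision logic: classify a sensed value
# against bounds, and only publish when the classification changes
# (sparing the radio and any resource-constrained screen).
def classify(value, low, high):
    if value < low:
        return "too low"
    if value > high:
        return "too high"
    return "just right"

def should_publish(previous_label, label):
    return label != previous_label

label = classify(30.0, 18.0, 25.0)   # a too-warm temperature reading
```

Publishing only on state changes is one common way to keep chatty sensors from flooding the messaging substrate.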

IoT controller (M5) with Motion Sensor and Touch Button physically connected with GROVE wires. The display has limited space. A 3D-printed button wraps the red button for easier touch.

More powerful devices, such as smartphones, tablets, and watches, can also contribute as controllers and sensors, while enjoying more screen real estate, better camera sensing, and augmented reality (AR) graphic feedback (described below). This functionality is captured in apps. These too can message over Bluetooth and MQTT, while adding HTTP (web), email, texting, FaceTime, Vision Goggles, etc.

Message Framework (BLE, WiFi, MQTT, HTTP, RPC, web-sockets, etc)

Messages are everything! Without messaging, IoT things are almost useless, or at least isolated. This communication, hopefully in real time, is accomplished with some kind of transmission medium (radio waves, CB, wireless, WiFi, BLE, Bluetooth, smoke signals, ethernet wires, optical markers, gestures, etc.). Aside from smoke signals, as early as 1799 an optical telegraph was created for Napoleon. This could send (relay) information over long distances, but a coding language was still needed. Morse code came some 30 years later, and languages that can be accurately processed continue to evolve.

Today, with a renewed data-centric focus, we encode data values using a language format like JSON. They are sent over a communication medium — what we collectively call messaging. We send a message to an object, and that message is written in a known language.

The essence of what sets Computer Science apart is the ability to precisely process messages (or code), controlling the “utter generality of bits” — as Google’s Vint Cerf says. In particular, the compiler is the main Computer Science tool that translates language (messages or code) into executable bits, over and over, without ambiguity.

I could go on with the compiler discussion, believe me, but for this writeup, the compiler is the critical tool for parsing and processing the language constructs and implementing the concepts sent in the messages. This is the power of Computer Science: the efficient translation of a structured language into an executable program.

Application of language translation leads to efficient IoT messaging.

IoT Messaging, MQTT Namespace and User Storage Locations (in the cloud)

The diagram shows how elements of the messaging connect from edge devices to the cloud services. Messaging is encapsulated in the following: Language, Transport, Security, and Application Hooks.

  • Message Language Design (JSON, BNF, IDL, SQL, #hashtags)
  • Message Transport Capabilities (MQTT, Bluetooth, WebSocket, RPC, DDS, CORBA, Telegram, Discourse, Twitter, Smoke Signals, Telegraph, email, HTTP, WiFi)
  • Message Security (transport encryption, passwords, tokens, user accounts, namespaces, SSL, location)
  • Application Hooks for support of messages (e.g., publish & subscribe, REST and node.js)

The resource-constrained IoT devices at the edge mainly use MQTT and Bluetooth BLE, many with a JSON format. Every time they start, they try reconnecting with known credentials. When working, all the devices are connected and collaborating with the rest of the collective.

Bluetooth BLE

The power of Bluetooth Low Energy (BLE) is that the IoT devices can act as both senders and receivers of small messages. While they can have many connections, the management complexity has our version staying one-to-one. BLE doesn’t need WiFi, and in well-defined environments, devices can automatically connect with the first device they find. Assuming only one device (like an IoT Dog Feeder) and one controller (the red M5 smart clicker shown above), they can connect and talk to each other instantly (and stay connected while in range). Note that security is complicated and usually non-existent (much like your spouse connecting to your Bluetooth speaker while you are using it).

MQTT pub/sub requires a constant runtime presence (and WiFi)

Subscribing for information on a password-protected topic requires a constantly running presence (basically so messages can be received). This requires a runtime library which general web pages don’t support. But almost all apps and the IoT edge devices support the MQTT protocol. As mentioned above, the ability to talk to any edge node in the world requires them to be listening and subscribing, with a message language designed to find a route to every edge node (which is why every device needs an addressable name, discussed below). User-protected topics are how our information is kept separate, unless group topics are shared for party-line chatting.

But first, every edge device needs a WiFi network, as MQTT is basically a tunnel through WiFi — and timing is important (WiFi needs to be connected before MQTT can be connected).

Login Credentials

Getting WiFi running is more complicated, as it needs the credentials of the network. These are the SSID and password you enter at a new facility (or your device remembers). On top of that, the MQTT publish/subscribe system also requires login credentials. Thus BLE is a good way to provide these credentials, assuming you have an app that can pass upwards of hundreds of characters.

The other method uses what’s called an Access Point (AP) — the ESP-32 IoT device turns into a web server and hosts its own WiFi network. You join that WiFi and the AP brings up a web page — served by the IoT device. When done, the device stores those values and reboots.

Credentials using the JSON approach might look as follows:

{"SSID": "wifiNetworkName",
"SSIDpass": "wifiPassword",
"username": "MQTT username",
"password": "MQTT Password",
"device": "DeviceName"}

Processing would first connect to the WiFi, and then connect to the MQTT server and start receiving and sending messages.
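That WiFi-before-MQTT ordering can be sketched as follows. The two connect functions are stand-ins for the real WiFi and MQTT libraries on the device, and the "SSID" key is an assumed name for the network alongside the credential fields shown above.

```python
import json

# Hypothetical boot sequence: parse the stored credentials, bring up
# WiFi first (MQTT tunnels through it), then log in to MQTT.
def boot(credentials_json, wifi_connect, mqtt_connect):
    creds = json.loads(credentials_json)
    if not wifi_connect(creds.get("SSID"), creds["SSIDpass"]):
        raise RuntimeError("no WiFi: stay on BLE only")
    return mqtt_connect(creds["username"], creds["password"], creds["device"])
```

On a real ESP-32 the same ordering holds: the WiFi join must complete before the MQTT client can open its connection, with BLE as the fallback when no network is available.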

Web Calls can run almost everywhere

The power of the HTTP web protocol is that almost any browser or device can call a web page. The GET and POST requests send information to a known address — a server processing the HTTP messages.

Node-RED is a valuable web server approach, acting as endpoints for web calls. Since it is constantly running, it stores user state, and it can also communicate with other processes — in particular the MQTT messaging substrate. Thus a web call for feed, as shown below, would have the username and password added as parameters (on the URL). The Node-RED processing (shown graphically) would take those parameters and send the feed message over the MQTT network, addressed to the device specified.

Node-RED endpoint for a “feed” command passing the username, password, and device, authenticating, then publishing over MQTT

This is securely called via a URL as follows (to a fictional server).
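A hypothetical version of such a call, with an invented server name and parameter names matching the description (username, password, and device on the URL):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Build the (fictional) feed URL; server and parameter names are
# illustrative, not the actual deployed endpoint.
params = {"username": "alice", "password": "secret", "device": "Francis"}
feed_url = "https://feeder.example.com/feed?" + urlencode(params)

# The Node-RED endpoint on the other side parses them back out
# before authenticating and publishing over MQTT.
query = parse_qs(urlparse(feed_url).query)
```

The HTTPS transport keeps the parameters encrypted on the wire while still letting any browser or watch trigger the call.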

Internally, Node-RED authorizes this and then sends it through the MQTT messaging framework (we are using Mosquitto). The round trip from the web call, to the cloud, and then back down to the devices at the world's edge is within a second (assuming all have fast networks). Impressive!

The map below is a (super-user) real-time update after a message was sent to all our IoT devices running worldwide. Their status and general location (city) are returned and displayed — in real time (if the user opted in). This is in preparation for a global “feed at the same time” event in the future!

The super-user-only real-time map shows some of the IoT feeders deployed worldwide (real or simulated, based on optionally user-supplied city names).

Web Pages wrap all the messaging functionality

Once messages have been designed and coded into all the IoT devices, wrapping them in a custom-made and authenticated web page is powerful. A web page, as partially shown below, can run on any device, even a watch. Subsets of these messages can be extracted into smaller pages as well (maybe just the feed command).

Smart Web Page to send messages (such as feed) to the device (here named Francis). See map.

It’s amazing how much JavaScript, HTML, and CSS it takes to make this powerful message-passing capability.

Names for everything

Sending a message to a particular object requires a way to specify that object. Here naming is invaluable; everything needs a distinguishing name. All the IoT devices shown in these images have a name. Sending messages through MQTT can either use fine-grained topics, or do what many IP multicast approaches do: encode the naming in the messages.

Of particular importance is when multiple devices are running; there needs to be ways to distinguish among them. “Siri, turn on the living room light” is complicated when there are lots of lights and switches.

Which light is which? Names would help. “Siri, turn on light #3, counting from the left (or from the right in some countries’ conventions).”
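Both addressing styles mentioned above can be sketched simply; the topic layout and payload field names here are invented for illustration.

```python
import json

# Style 1: fine-grained MQTT topics, one per user/device
# (the broker routes by topic alone).
def device_topic(user, device):
    return f"{user}/devices/{device}/cmd"

# Style 2: a shared topic, with the target's name encoded in
# the message itself (the IP-multicast style); each device
# checks whether the message is addressed to it.
def addressed_message(device, command):
    return json.dumps({"to": device, "cmd": command})

def is_for_me(my_name, payload):
    return json.loads(payload).get("to") == my_name
```

Fine-grained topics push the routing work onto the broker; message-embedded names keep the topic tree small but make every subscriber filter its own traffic.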

PAT — Point at Things (Optical Vision Markers)

With the messaging foundation described, the ability to trigger devices through a URL, and IoT devices listening for “trigger” commands, a new paradigm of invocation opens up — one particularly useful for the #Abilities market. We have a new concept called a Semantic Marker™: two-dimensional (2D) optical code images and barcodes that, when scanned or run, come alive through interactive feedback with the environment. Continual scanning will send secure internet messages to IoT devices and even show unique interactive Augmented Reality graphic overlays based on contextually and semantically relevant information (can you say Minority Report!)

PAT — Point at Things (scanning Semantic Marker™️ to trigger IoT Things through messaging)

One can just Point at Things — or PAT — and the correct IoT feeder is triggered, or the right light turned on. Using a Semantic Marker™-aware scanner (like the ones above and below) or a smartphone app, optical visual markers can be scanned and messages sent to the devices encoded in the image.
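As a sketch, a marker-aware scanner might pull the target device and command out of the URL embedded in the marker. The URL scheme here is invented for illustration; a non-aware scanner would simply open the URL as a web page.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical PAT decoding: extract which device to trigger and
# what command to send from a scanned Semantic Marker URL.
def decode_marker(url):
    query = parse_qs(urlparse(url).query)
    return query["device"][0], query["cmd"][0]

device, cmd = decode_marker(
    "https://sm.example.com/trigger?device=Francis&cmd=feed")
```

Once decoded, the (device, command) pair feeds straight into the same MQTT messaging path a web call or button press would use.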

PAT Scanner — 3D-printed custom case for IoT parts (Author’s photo)

Semantic Marker™️

The Semantic Marker™ is a special optical visual marker, including two-dimensional (2D) optical code images and barcodes. More on this in a follow-on article (and please visit

Aside from triggering IoT devices, scanning optical markers can also be part of renewed agency by linking together or grouping common tasks. We call this a Vision Linker, where operations, described with a Semantic Marker™️, are collected (linked) for future use. So scan the actions (like feeding the dog, changing the TV channel, or multi-step actions) — and store them away for use in the future. #Abilities programming!

The 2D Optical Vision Markers — Semantic Markers™️ — are the new hieroglyphics (but ones we know how to decode and leverage through messaging)

Even the photo bookmark aspect (the Avatar and Semantic Address) is valuable as shown below. They can be as simple as a link to a folder in an on-line photo site, or do all the IoT operations mentioned here. The default Augmented Reality (AR) feedback is precisely that same Avatar (but overlaying the entire marker.)

2D Optical Markers with a graphic Avatar are a key aspect of a Semantic Marker™️, helping distinguish among potentially thousands of markers.

Augmented Reality (AR)

The Semantic Marker™️ is the anchor for creating a new user interface interaction. In the two examples below, the smartwatch shows the 2D optical vision marker, and then switches to the graphic image behind the marker. This is done either with an Augmented Reality (AR) graphic overlay, or by substituting the image with the same AR graphic rendition (static, or dynamic — grabbing MQTT values like the 1 million shown). We call this the QRAvatar behind the Semantic Marker™️.

Author’s app for the smart watch showing an Augmented Reality (AR) image of the number of global feedings sent over WiFi to the ESP-32 dog-feeding IoT devices (1 million and counting). Clicking the feed button triggers an IoT feed.

The IoT devices, like the M5 shown below, can also create and show a Semantic Marker™️. These can be dynamically updated, and with the right decoder (scanner) the meaning and values behind the optical marker can be displayed (and constantly updated).

(left) Semantic Marker™️ displayed on M5 IoT Device, constantly changing. (right) Using my iOS App, this is decoded and displayed using Augmented Reality (AR) graphic overlays. A non app aware tool would just take you to a web page (try it)

With this hands-off display of valuable information, a unique new form of user interface is possible, one especially suited to the #Abilities market. The image below shows this, where a “trigger” message is sent to the dog feeder just by scanning (not touching) the optical marker — again, regaining agency. This version provides haptic tactile feedback when the scanning is matching (and here an Augmented Reality dog image is shown — Scooby Doo).

A non-technical user doesn’t even have to touch the iPhone; with PAT it automatically triggers the dog feeder and provides haptic feedback.

3D Printing and not afraid to use it

Finally, pulling all this together and deploying it will help #Abilities users. With the right skills (thanks, Orion), designing 3D models and printing them on 3D printers is a powerful capability. The sky’s the limit: unique mounts for the various wheelchairs, new easier-to-touch buttons, or unique containers for any of the IoT devices.

3D-printed Dog Feeder with custom wheelchair mount (on right) and embedded ESP-32 controller (listening over BLE and MQTT). (Product called PetTutor from Smart Animal Training and iDogWatch.)

Putting it all together for #Abilities

The elements discussed offer a powerful capability for anyone, but can especially help bring back agency to the #Abilities market.

The IoT devices are almost tailor-made for hands-off use. For example, the motion sensor lets a user wave a hand to trigger a device (feed the dog, turn on the lights, turn the wheelchair, etc.). A proximity sensor can alert the user to close objects (perhaps with a sound or vibration). The potential is unlimited.

When combined with the IoT messaging framework outlined above, newly imagined interactions are possible between the IoT sensors, IoT controllers, the users, trusted remote agents, and others. Wrap it all in custom-made 3D-printed containers, and it’s an exciting new opportunity.

Dog Feeder connected to a wheelchair using a custom 3D-printed mount (like the pic above). This one is controlled through voice commands — “saying” numbers (1–10) that map to the iPhone app buttons on each screen.

Start thinking outside the box and dream up possible solutions using the concepts described here. Drop me a note especially if you are interested in trying this framework as a client. More information at my links.



Computer Scientist and futuristic IoT app developer; KnowledgeShark is my next-gen technology navigator; Distributed Computing historian since 1979.