Discussion about this post

Bill Murvihill

Hi Benjie, I hope you've been great!

One approach to enabling robots is the video-training approach for AI/neural-net systems. In the case of self-driving cars, this amounts to '1 million hours' of training video fed into the training supercluster, with the resulting parameters then output to the inference engines.
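
To picture the flow, here is a minimal sketch of that "train once, download everywhere" idea. This is purely illustrative: every function and class name is made up, and the weights are placeholders, not anything from a real system.

```python
# Hypothetical sketch of the "train on video, download parameters" pipeline.
# No real library is referenced; all names are invented for illustration.

from dataclasses import dataclass


@dataclass
class PolicyParameters:
    """Frozen weights produced by the training supercluster."""
    weights: list[float]


def train_on_video(hours_of_video: int) -> PolicyParameters:
    # Stand-in for the supercluster job: ingest video, return learned weights.
    print(f"training on {hours_of_video:,} hours of video ...")
    return PolicyParameters(weights=[0.0] * 1_000)  # placeholder weights


def download_to_robot(params: PolicyParameters) -> None:
    # Stand-in for pushing frozen parameters to an inference engine in the field.
    print(f"deployed {len(params.weights)} parameters to the robot")


if __name__ == "__main__":
    params = train_on_video(hours_of_video=1_000_000)  # the '1 million hours'
    download_to_robot(params)
```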

If instead we're training a robot to do the laundry, I see the entire chore in its full activity:

- Roam the house looking for the kid's socks

- Gather the dirty clothes from baskets & hampers

- Sort the laundry into hot/warm/cold loads, per the desires of the owner

- Load the washer

- Add detergent (liquid or capsule)

- Set the cycle parameters, per make/model of washer and owner-desired settings

- Start the load

- Promote the load to the dryer when finished.

- Set the dryer cycle parameters, per make/model and owner-desired settings

- Remove clothes when dryer completes

- Empty lint screen

- Fold clothes and sort by owner

- Put the clothes away: hang on hangers or place in drawers, based on each owner's desired placements

Then, the fun begins...

- Handle exceptions like a load out of balance, a power outage interrupt, the washer starting to leak, ... (a rough sketch of the whole decomposition follows below)
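
If it helps to make the question concrete, here is a rough sketch of the chore as an ordered plan of named skills with an exception handler wrapped around each step. Every step and function name here is hypothetical, chosen only to mirror the list above, not any actual robot API.

```python
# Illustrative only: the laundry chore as an ordered plan of named skills,
# each step wrapped in exception handling. All names are invented.

LAUNDRY_PLAN = [
    "find_stray_socks",
    "gather_dirty_clothes",
    "sort_into_loads",        # hot / warm / cold, per owner preference
    "load_washer",
    "add_detergent",
    "set_wash_cycle",         # per washer make/model and owner settings
    "start_washer",
    "move_load_to_dryer",
    "set_dry_cycle",
    "unload_dryer",
    "empty_lint_screen",
    "fold_and_sort_by_owner",
    "put_clothes_away",       # hangers vs. drawers, per owner placement
]


class StepFailed(Exception):
    """Raised by a skill when something unexpected happens mid-chore."""


def run_step(step: str) -> None:
    # Stand-in for invoking the learned skill on the robot.
    print(f"running: {step}")


def do_the_laundry() -> None:
    for step in LAUNDRY_PLAN:
        try:
            run_step(step)
        except StepFailed as err:
            # The "fun" part: unbalanced load, power outage, leaking washer, ...
            print(f"exception during {step}: {err}; pausing for recovery")
            break


if __name__ == "__main__":
    do_the_laundry()
```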

My question (thanks for reading through all of that!): do you think it would require more, less, or the same amount of training video as self-driving cars to enable this complete approach to 'doing the laundry', so that when a robot owner purchases the 'do the laundry' ability, the parameters can be downloaded to their robot and they never have to care about the laundry again?

