Dear readers,
I got this email earlier today:
“I am registered blind and use a Screen Reader (software) to translate computer output into speech and electronic Braille. The screen reader cannot interpret images. Hence, I can neither read nor draw diagrams.
I am going for a job interview next week for a Business Analyst role which will involve producing business process maps … I am therefore investigating text methods that would allow me to input a text description and have the tool churn out the diagram for me. Conversely, I was wondering if I could get a text flow description of a diagram drawn pictorially?”
My first answer was to point him to the list of textual UML tools, but this is not a very satisfactory answer since:
- He is interested in drawing business process models (I guess he refers to BPMN-like models) and not UML ones
- He is also looking for a tool that can describe graphical diagrams, and none of the tools in the list (as far as I remember) focus on this
A quick internet search only points to this funded European project for Accessible UML, which basically failed to deliver the expected results (as they themselves acknowledged).
Long story short, any advice/comment that can help this blind modeler would be appreciated. Even beyond the specific question that motivated this post, I do believe that this is a very interesting (research) problem that deserves some thorough thinking. Therefore, any input (even if it does not directly answer the original question) will be very appreciated. Thanks for your help!!
FNR Pearl Chair. Head of the Software Engineering RDI Unit at LIST. Affiliate Professor at University of Luxembourg. More about me.
This is very tricky, primarily because the distinct advantage UML has over a textual description is its visual aspect.
With a visual model all the information is presented concurrently and persistently, meaning the reader doesn’t have to remember it and can digest it using the most appropriate path for their purpose and/or personal style.
With a spoken textual description, each statement is transitory and is read sequentially. This means the reader has to remember each statement and can only analyse backwards (i.e. make connections between the current statement and previously remembered ones).
It is possible to read multiple sequences to illustrate multiple dimensions, but again this requires the listener to remember and combine items that appear in multiple sequences (something the brain isn’t very good at). If I read out three lists that share a common element, e.g. A,B,C & D,B,E & F,B,G, then although there are only 7 distinct elements there are 9 instances, which is towards the upper limit of human comprehension.
There’s no direct feedback loop. If the Analyst can hold a mental picture of the BPM (I’m assuming he/she hasn’t been blind from birth) then it’s feasible for them to dictate this via spoken language, which is then translated into a visual model. However, as they are unable to see this model, the only way for them to validate that it is correct is by having it translated back into a spoken description. Unfortunately, all this would prove is that the translation from text to visual model and back is consistent. There would be no way for the Analyst to ever know how accurately the transcribed visual model matched their mental model.
I would suggest that since the Analyst has to work in a textual domain, all parties involved should work in a textual domain. In order to do this it would be important to work out which elements of a BPM (when described in text) are important and which are not, although this is probably true for graphical representations too.
My only “idea” would be to use 3D printers, at least to make UML diagrams understandable to blind people.
This topic sounds very interesting to me.
Usually there is much fuss about things like the representation, transformation, etc. of models. These are very important points; however, imho the most important point of them all is how models are ‘processed’ by humans. Then suddenly issues arise, like the ones James Towers pointed out:
“… digest it using the most appropriate path for their purpose and/or personal style.”
“… it would be important to work out which elements of a BPM (when described in text) are important and which are not, although this is probably true for graphical representations also”
I strongly agree with these and lots of the other points. They are important not only for the blind, but also for ‘normal’ model readers. That’s why I think this topic is worth deeper scientific analysis.
ps
“Human Centric Modelling”
Two of my family members are registered blind and deaf, and I’ve written some software for them over the years. It’s a fascinating, at times frustrating, but ultimately rewarding experience to try to figure out a decent UI. Often, the result is very different from a simple conversion of existing software to a Braille terminal (generally a single row of 40 or 80 read-only Braille characters) or speech.
If I were working with a totally blind analyst, I think I would make a wooden block for each class, labelled in Braille. There could be differently shaped blocks for different UML types. The relationships between them would be more spatial than explicit, e.g. inheritance vertically and aggregation horizontally. This would be quite close to blind chess – a chess board is full of relationships, and blind people are good at remembering the state of play and relationships, feeling the board to remind themselves. If the relationships needed to be explicit, it might be worth adding them in different kinds of string, wool etc. for different types of relationship, labelled in Braille where necessary.
My guess is that we could work together with this and build up a shared understanding to a level similar to that of sighted people on a whiteboard (personally, I’d actually prefer some elements of this to the whiteboard, particularly ease of moving around). We could take photos of the layout or I could duplicate it in a UML tool. When the model became more frozen, we could print it out on a Braille printer – looks like there are “high resolution” ones that can do a reasonable combination of “gray-scale” dot-matrix graphics plus text as Braille.
If this was less of a collaborative effort, and/or the analyst preferred text to spatial graphics, I think coming up with a textual DSL for the particular domain (i.e. not textual UML) might be the best approach. A screen reader could work, but my guess is that a Braille display would be more effective. I suppose it comes down to auditory vs. tactile / spatial. Having watched a blind programmer use a Braille display, it seemed much more similar to sighted programming than watching my relatives trying to make sense of non-sequential text with a speech reader. Braille displays and printers are, however, very expensive compared to speech systems.
Good luck with the interview, and remember that if people are averse to trying new ways of building systems, it’s not a reaction to your blindness – trying to persuade people to move from text to graphics or UML to DSL is equally hard work, it’s all just a question of comfort zones and wanting to stick to what we’re used to.
Via Michel Chaudron and Eelke Folmer, some links to research papers exploring this same issue:
http://iacis.org/iis/2006/Brookshire.pdf
http://opax.swin.edu.au/~jhamlynharris/Papers/OZeWAI20031.html
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.6892&rep=rep1&type=pdf
Hey
I have the same query: has there been any new software in the past year that might be able to assist?
Hi All.
I’ve just come across this question whilst trying to gauge the interest in a series of articles I’ve been thinking about writing.
I’m a blind software engineer living and working in the UK. For the last 6 years I’ve been part of software development teams which use an MDG way of working. Both teams have used off-the-shelf UML modelling tools, namely Enterprise Architect and IBM Rational Rhapsody.
I’ve come up with a number of methods and techniques for modelling software in UML using these tools. Using my screen reader, i.e. spoken text and electronic braille, I’m able to both read and write UML diagrams. I regularly write and read use case, state machine, sequence and class diagrams.
It’s becoming pretty clear to me that some posts on how I do this would be useful as this seems to be a question that comes up a bit.
The reason I’m posting this is that I wanted to let you know that using access tech, UML can be possible, if not easy, and that with a fair bit of persistence it’s possible for a blind person to get involved in software design. It’s not completely natural for someone who cannot see to use UML; in my own personal projects I use textual descriptions. However, UML in the workplace is becoming important, and not being able to use it would make a blind software dev less employable.
One thing I am very keen on is using the same tools as my sighted teammates, and I generally try to stay away from tools that don’t allow me to do this. So I would strongly suggest that if any research and development is done in this area, rather than developing standalone solutions, a better option is to add plug-ins or extensions to existing tools.
Hi Nick,
A possible workaround may be to use a textual UML notation. There are quite a few tools that let you define and read UML models as textual descriptions (and then automatically render them as graphics). A list of textual UML tools can be found here: https://modeling-languages.com/uml-tools/#textual
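For instance, a small class model can be written and read entirely as text in PlantUML, one of the tools on that list (a minimal sketch; the class names are just placeholders):

```plantuml
@startuml
class Customer {
  +name : String
}
class Order
Customer "1" --> "*" Order : places
@enduml
```

PlantUML renders this as a class diagram, while the source itself stays screen-reader and Braille-display friendly.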
This post may be a bit late for the question, but I am currently working on this very problem as an undergraduate research project. I have a script that can readily convert a UML diagram into a 3D model for a 3D printer, but I am running into a few issues that I could use feedback on.
-First, I would really like to know the actual scope of the problem. That is to say, are there many visually impaired people working in fields that use UML diagramming? There are a few directions I could take this project, but some might not be worth the trouble if this isn’t a fairly common issue.
-Second, and especially directed towards visually impaired persons, what would you find to be the easiest method for conveying the textual descriptions found on the diagrams? I have tried several different methods but could use feedback on which existing systems, i.e. audio or braille, are the most helpful.
I appreciate any feedback!
I’ve spent a lot of time working with a blind graduate student in computer science. We discarded all tactile approaches because of the time needed to internalize the graphed structures. A key observation is that, if you treat {} delimiters as boxes in the UML, then most of what remains is text. Since my student was already used to Java keywords anyway, and found that the JAWS reader vocalized symbols like -> for an arrow in a noisy way, we went to Java-like keywords for the relations, e.g., extends, implements, uses. We are in the midst of drafting a conference paper now, and would be glad to share the paper & Python software in summer 2017, after everything stabilizes. [email protected]
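The keyword substitution described above can be sketched in a few lines of Python (my own illustrative sketch, not the actual software mentioned in the comment; the relation kinds and class names are hypothetical):

```python
# Hypothetical sketch: verbalize UML relations using Java-like keywords
# so a screen reader speaks "Car extends Vehicle" instead of a noisy
# rendering of arrow glyphs. All names here are illustrative only.

RELATION_KEYWORDS = {
    "generalization": "extends",
    "realization": "implements",
    "dependency": "uses",
    "association": "relates to",
}

def verbalize(relations):
    """Turn (source, kind, target) triples into reader-friendly sentences."""
    lines = []
    for source, kind, target in relations:
        keyword = RELATION_KEYWORDS.get(kind, kind)  # fall back to raw kind
        lines.append(f"{source} {keyword} {target}")
    return lines

model = [
    ("Car", "generalization", "Vehicle"),
    ("Car", "realization", "Drivable"),
    ("Garage", "association", "Car"),
]

for line in verbalize(model):
    print(line)  # e.g. "Car extends Vehicle"
```

The same table could be extended with further relation kinds, or inverted to parse the spoken form back into model elements.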
Great work!
Just one question. I am sure you have tried the Human Usable Textual Notation (HUTN) for models:
– http://www.omg.org/spec/HUTN/
– http://www.eclipse.org/epsilon/doc/hutn/
Why didn’t it work?
Thanks,
Antonio
I’m very interested in it! Please let us know when it’s available!