
Visual Programming Language and .NET - BIT III

User Interface Design
PURBANCHAL UNIVERSITY: BIT III SEMESTER

sarojpandey.com.np



 
- SOURCE -
The Essentials of User Interface Design
ALAN COOPER
Wiley India Pvt. Ltd.
www.wileyindia.com


1. INTRODUCTION
Character-based System / Command Line Interface
A CLI (command line interface) is a user interface to a computer's operating system or an application in which the user responds to a visual prompt by typing a command on a specified line, receives a response from the system, and then enters another command, and so forth. The MS-DOS Prompt application in a Windows operating system is an example of a command line interface. Today, most users prefer the graphical user interface (GUI) offered by Windows, Mac OS and others. Typically, most of today's UNIX-based systems offer both a command line interface and a graphical user interface.
 
A command-line interface (CLI) is a mechanism for interacting with a computer operating system or software by typing commands to perform specific tasks. This text-only interface contrasts with the use of a mouse pointer in a graphical user interface (GUI) to click on options, or menus in a text user interface (TUI) to select options. This method of instructing a computer to perform a given task is referred to as "entering" a command: the system waits for the user to conclude the submission of the text command by pressing the "Enter" key (a descendant of the "carriage return" key of a typewriter keyboard). A command-line interpreter then receives, parses, and executes the requested command. The command-line interpreter may run in a text terminal or in a terminal emulator window as a remote shell client such as PuTTY. Upon completion, the command usually returns output to the user in the form of text lines on the CLI. This output may be an answer if the command was a question, or otherwise a summary of the operation.
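The receive-parse-execute cycle described above can be sketched in a few lines of Python. This is only an illustration of the idea, not a real shell; the command names (greet, add, quit) are invented for the example.

```python
# Minimal sketch of a command-line interpreter's receive-parse-execute
# cycle. A real shell would dispatch to system programs instead of the
# toy commands used here.

def parse(line):
    """Split a raw input line into a command name and its arguments."""
    parts = line.strip().split()
    return (parts[0], parts[1:]) if parts else (None, [])

def execute(command, args):
    """Run one command and return its text output, like a shell would."""
    if command == "greet":
        return "Hello, " + " ".join(args)
    if command == "add":
        return str(sum(int(a) for a in args))
    return f"unknown command: {command}"

def repl(lines):
    """Receive, parse, and execute each submitted command in turn."""
    outputs = []
    for line in lines:  # each line ends when the user presses "Enter"
        command, args = parse(line)
        if command == "quit":
            break
        outputs.append(execute(command, args))
    return outputs
```

For example, `repl(["greet world", "add 2 3", "quit"])` returns the two output lines `["Hello, world", "5"]`, mirroring how a CLI prints a response after each entered command.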
 
 
Graphical User Interfaces
A graphical user interface (GUI) is a program interface that takes advantage of the computer's graphics capabilities to make the program easier to use. Well-designed graphical user interfaces can free the user from learning complex command languages. On the other hand, many users find that they work more effectively with a command-driven interface, especially if they already know the command language.

Graphical user interfaces, such as Microsoft Windows and the one used by the Apple Macintosh, feature the following basic components:

Pointer: A symbol that appears on the display screen and that you move to select objects and commands. Usually, the pointer appears as a small angled arrow. Text-processing applications, however, use an I-beam pointer that is shaped like a capital I.


Pointing device: A device, such as a mouse or trackball, that enables you to select objects on the display screen.

Icons: Small pictures that represent commands, files, or windows. By moving the pointer to the icon and pressing a mouse button, you can execute a command or convert the icon into a window. You can also move the icons around the display screen as if they were real objects on your desk.

Desktop: The area on the display screen where icons are grouped is often referred to as the desktop, because the icons are intended to represent real objects on a real desktop.

Windows: You can divide the screen into different areas. In each window, you can run a different program or display a different file. You can move windows around the display screen, and change their shape and size at will.

Menus: Most graphical user interfaces let you execute commands by selecting a choice from a menu.
 
Visual Programming
- A programming language that uses a visual representation (such as graphics, drawings, animation or icons, partially or completely).
- A visual language manipulates visual information or supports visual interaction, or allows programming with visual expressions.
- Any system where the user writes a program using two or more dimensions.
- A visual language is a set of spatial arrangements of text-graphic symbols with a semantic interpretation that is used in carrying out communication actions in the world.

Microsoft Visual Programming Language (VPL) is an application development environment designed on a graphical dataflow-based programming model. Rather than a series of imperative commands executed sequentially, a dataflow program is more like a series of workers on an assembly line, who do their assigned task as the materials arrive. As a result, VPL is well suited to programming a variety of concurrent or distributed processing scenarios.
 
VPL is targeted at beginner programmers with a basic understanding of concepts like variables and logic. However, VPL is not limited to novices. The language may also appeal to more advanced programmers for rapid prototyping or code development. As a result, VPL may appeal to a wide audience of users, including students, enthusiasts/hobbyists, as well as web developers and professional programmers.
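The assembly-line metaphor can be sketched in ordinary code as a chain of worker stages, each processing items as they arrive from the previous stage. This is a hypothetical illustration of the dataflow idea, not VPL itself; the stage names are invented.

```python
# Sketch of the dataflow idea behind VPL: each "worker" consumes items
# as they arrive from the previous stage, rather than running as a
# sequence of imperative commands.

def source(items):
    """First station: emit raw materials one at a time."""
    for item in items:
        yield item

def double(stream):
    """Second station: process each item as soon as it arrives."""
    for item in stream:
        yield item * 2

def label(stream):
    """Third station: attach a tag to each finished item."""
    for item in stream:
        yield f"part-{item}"

def run_line(items):
    # Wire the stations together; data flows through the whole line.
    return list(label(double(source(items))))
```

Here `run_line([1, 2, 3])` yields `["part-2", "part-4", "part-6"]`: each value travels through every station as soon as it is produced, which is why such pipelines map naturally onto concurrent or distributed execution.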

Visual Interface Components

Major Visual Components
Window, Controls (Button, Edit Box, Check Boxes, Radio Button, List Box, Combo Box, Image List and Tree View, Dialogue Boxes, Menu and Icons, Scrollbar, Tool Bar, Status Bar), etc.
 
Event Driven Programming
Event-driven programming or event-based programming is a programming paradigm in which the flow of the program is determined by events, i.e. sensor outputs, user actions (mouse clicks, key presses) or messages from other programs or threads.

Event-driven programming can also be defined as an application architecture technique in which the application has a main loop that is clearly divided into two sections: the first is event selection (or event detection), and the second is event handling. In embedded systems the same may be achieved using interrupts instead of a constantly running main loop; in that case the former portion of the architecture resides completely in hardware.

Event-driven programs can be written in any language, although the task is easier in languages that provide high-level abstractions, such as closures. Some integrated development environments provide code generation assistants that automate the most repetitive tasks required for event handling.
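A minimal main loop with those two sections, event selection followed by event handling, might look like the sketch below. The widget names, actions, and handlers are invented for illustration; real GUI frameworks differ in detail.

```python
# Minimal sketch of an event-driven main loop, split into the two
# sections described above: event selection, then event handling.

from collections import deque

def make_app():
    handlers = {}  # (widget, action) -> handler function

    def on(widget, action, handler):
        """Register a handler for a named event."""
        handlers[(widget, action)] = handler

    def run(event_queue):
        """Main loop: select the next pending event, then handle it."""
        log = []
        queue = deque(event_queue)
        while queue:                               # event selection
            widget, action, payload = queue.popleft()
            handler = handlers.get((widget, action))
            if handler:                            # event handling
                log.append(handler(payload))
        return log

    return on, run

on, run = make_app()
on("icon3", "click", lambda p: f"icon3 clicked at {p}")
on("form", "keystroke", lambda p: f"key {p!r} pressed")
```

Calling `run([("icon3", "click", (10, 20)), ("form", "keystroke", "a")])` drains the queue one event at a time, dispatching each to its registered handler; the program's flow is determined entirely by the order in which events arrive.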


Typical features of events in event-driven programming

- Events are generally referenced, indexed, or named based on an object (noun) and the type of action that triggered the event. For example, "icon3_click" or "icon3 - onClick". Thus, they usually have at least two "keys" (in an informal sense).

- There is usually a way to wild-card the above, such that one can optionally react on or intercept, say, every click regardless of widget, or every event of a given widget.

- Often there is an object, parameter set, or dictionary/array structure that is passed in as a parameter and can be used to find out more about the environment that triggered the event. For example, it may contain the keystroke or mouse coordinates at the time of event triggering.

- Events often return a status indicator as to whether the event was successful or not. For example, an "onValidate" event may return True if a form passed validation, or False if it did not. Another approach is to return a string or structure containing the error message(s) if a problem was detected. An alternative is an AddError() API operation. Not all events need to return results.

- Events can often "talk" to a declarative state framework and/or database. For example, in a GUI an event may be able to change the colors or text of an input box not related to the event.

- Events are generally treated as small procedural modules. Ideally multiple (local) functions are allowed in events, but some systems don't allow multiple routines per event unless calling shared libraries. Generally, a language like Pascal that allows nested functions simplifies scoping issues.

- Some rules usually need to be set about which event has priority if multiple events are triggered at the same time. For example, both a form and a widget may have an "onKeystroke" event. Sometimes priorities can be set on individual events; other times the framework lays out the rules and cannot be directly changed.

- Generally, all pending events are processed before user input is sampled for the next set of events. Any user input happening during event processing is put in a queue. When all pending events are processed, the input queue is then processed again, resulting in perhaps further events. (An API may perhaps allow one to clear the input buffer from an event.)

- Events are often allowed to trigger other events through an API. One must be careful to avoid infinite looping, however (see below). One approach is to simply call another event. Another is to put "dummy" user input actions into the input queue that will trigger events.

- Similarly, events may be able to "cut off" any pending (queued) events. This may be done by setting a flag on the return structure/object, or by returning a special error code.

- A mechanism may be needed to prevent recursive or continuous looping of the same event or event set. A "max recur" setting may be needed per event or for the event engine. When all pending events are processed, the counter is reset to zero.
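Three of the features above, per-event priority, a pending-event queue, and a "max recur" guard against infinite triggering, can be combined in one small sketch. All names here are invented for illustration and do not correspond to any particular framework.

```python
# Sketch of an event engine with priority ordering, a pending-event
# queue, and a "max recur" guard that stops the same event from
# re-triggering forever.

import heapq

MAX_RECUR = 3  # engine-wide limit on triggers per event name

def dispatch(initial_events, handlers, priorities):
    """Process events in priority order; handlers may raise new events."""
    heap, counter, fired, log = [], 0, {}, []
    for name in initial_events:
        heapq.heappush(heap, (priorities.get(name, 99), counter, name))
        counter += 1
    while heap:
        _, _, name = heapq.heappop(heap)       # lowest priority number first
        fired[name] = fired.get(name, 0) + 1
        if fired[name] > MAX_RECUR:            # recursion guard kicks in
            continue
        log.append(name)
        for new in handlers.get(name, lambda: [])():
            heapq.heappush(heap, (priorities.get(new, 99), counter, new))
            counter += 1
    return log

# The form event triggers the widget event, which keeps re-triggering
# itself; without the guard this would loop forever.
handlers = {
    "form_onKeystroke": lambda: ["widget_onKeystroke"],
    "widget_onKeystroke": lambda: ["widget_onKeystroke"],
}
priorities = {"form_onKeystroke": 1, "widget_onKeystroke": 2}
```

Dispatching `["form_onKeystroke"]` runs the form event once and the widget event only three times before the guard suppresses further repeats, illustrating both the priority rule and the max-recur cutoff.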

Typical Inputs and Outputs of Events, using a GUI system as an example:

- Input
  - Keyboard key last pressed
  - Mouse coordinates
  - Line number and/or token value (for event-driven parsers)
  - Other prior event indicator/counter (to know if this is the only event in the "stack")

- Output
  - Event success or failure (such as validation results)
  - Error message(s) (may be used instead of a status indicator)
  - Refresh indicator (do we need to refresh the display?)
  - Attributes specific to the event kind or triggering object (noun)
  - Logging or debug info (to be written to an optional log)

Typical Event Attributes (usually set during design time)

- Noun associated with the event (such as a widget ID)
- Value of the widget at the time of the event (if applicable)
- Action associated with the event ("onClick", "onValidate", "onClose", etc.)
- Priority of the event, in case of multiple events in the event stack
- Maximum occurrences per user action, to prevent infinite trigger loops (may also be system-wide instead of per event)
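These design-time attributes amount to a small record attached to each event binding. A possible shape, with invented field names, is:

```python
# The design-time event attributes above, modeled as a small record
# type. Field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class EventBinding:
    noun: str            # widget the event is attached to, e.g. "icon3"
    action: str          # trigger, e.g. "onClick", "onValidate"
    priority: int = 99   # lower number wins when events collide
    max_recur: int = 3   # cap on re-triggering per user action

    @property
    def name(self):
        """Conventional noun-plus-action event name, e.g. icon3_onClick."""
        return f"{self.noun}_{self.action}"

binding = EventBinding(noun="icon3", action="onClick", priority=1)
```

The `name` property reproduces the "at least two keys" naming convention from the feature list: the noun and the action together identify the event.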


2. MODELS OF INTERFACE DESIGN

People in the computer industry frequently toss around the term "computer literacy". They talk about how some people have it and some don't, and about how those who have it will succeed in the information age while those who lack it will fall between the social and economic cracks of the new age. But computer literacy is nothing more than an expression for making the user stretch to reach an information age appliance, rather than having the appliance stretch to meet the user.
 
The Three Models
1. The Implementation Model
2. The Mental Model or The Conceptual Model
3. The Manifest Model

The Implementation Model
- The model which represents the actual working of a device.
- It reflects the technology.
- A motion picture projector uses a complicated sequence of intricately moving parts to create its illusion. It shines a very bright light through a translucent, miniature image for a fraction of a second. It then blocks out the light for a split second while it moves another miniature image into place, then unblocks the light again for another moment. It repeats this process with a new image 24 times per second. This actual method of working of the device is the implementation model.
 
The Mental Model
- It reflects the user's vision.
- People do not need to know all of the details of how some complex process works in order to use it.
- It is impossible for users to visualize the complexity of the implementation of computer software and see the connections between their actions and the program's reaction.
- In the picture projector example, it is easy to forget the nuance of sprocket holes and light-interrupters while watching an absorbing drama. The viewer imagines that the projector merely throws a moving picture onto the big screen. Users don't need to understand the details of the working of that projector; they just enjoy the show. This model is the conceptual or mental model.
 

People don't need to know all of the details of how some complex process actually works in order to use it; they create a mental shorthand for explaining it, one that is powerful enough to cover all instances, but that is simple and easy. For example, many people imagine that when they plug their vacuums and blenders into the outlets in the wall, electricity travels up to them through little black tubes. This mental model is perfectly adequate for using household electricity. The fact that nothing actually travels up the cord, or that there is a reversal of electrical potential 120 times per second, is irrelevant to the user, although the power company needs to know these details.
 
In the digital world, however, the differences between a user's mental model and the actual implementation model may be stretched far apart. We ignore the fact that a cellular telephone might swap connections between a dozen different cell antennas in the course of a two-minute phone call. Knowing this doesn't help us understand how to work our car phones. This is particularly true for computer software, where the complexity of implementation can make it nearly impossible for the user to see the connection between his action and the program's reaction. When we use the computer to digitally edit sound or display video effects like morphing, we are bereft of analogy to the mechanical world, so our mental models are necessarily different from the implementation model. Even if the connections were visible, they would remain inscrutable.
 
The Manifest Model
- Computer software has a behavioral face it shows to the world, one which is made by the programmer or a designer.
- This outside look is not necessarily an accurate representation of what is really going on inside the computer. This ability to represent the computer's functioning independently of its actions allows the clever designer to hide the complexity of the software.
- The manifest model is the way in which the program represents its functioning to the user.
- Considerations of efficiency and technology strongly affect software developers' decisions in their choice of the manifest model.
- Designers, on the other hand, are more neutral in their choice.
- The closer our manifest model comes to the user's mental model, the easier the user will find the program to use and understand. Offering a manifest model that closely follows the implementation model will reduce the user's ability to use and learn the program.


- In the world of software, a program's manifest model can be quite divergent from the actual processing structure of the program. For example, an operating system can make a network file server look as though it were a local disk. The fact that the physical disk drive may be miles away is not made manifest by the model. This concept of the manifest model has no counterpart in the mechanical world.
 
 
 
 
 
 
 
 
We tend to form mental models that are simpler than reality, so creating manifest models that are simpler than the actual implementation model can help the user achieve a better understanding. Pressing the brake pedal in your car, for example, may conjure a mental image of pushing a lever that rubs against the wheels to slow you down. The actual mechanism includes hydraulic cylinders, tubing, and metal pads that squeeze on a perforated disk, but we simplify all of that in our minds, creating a more effective, albeit less accurate, mental model.
 
In  software,  we  imagine  that  a  spreadsheet  'scrolls'  new  cells  into  view  when  we  click  on  the  
scrollbar.  Nothing  of  the  sort  actually  happens.  There  is  no  sheet  of  cells  out  there,  but  a  tightly  
packed  heap  of  cells  with  various  pointers  between  them,  and  the  program  synthesizes  a  new  
image  from  them  to  display  in  real  time.  
 
Most software conforms to implementation models
- Often designed by engineers who know exactly how the software works, the result is software with a manifest model that is very consistent with its implementation model.
- This is logical and truthful, but not necessarily effective.
- Users don't care all that much about how a program is actually implemented. They care about any problems that arise because of the difference between the models, but the difference itself is of no particular interest.


- There is a communication gap between technical people, who understand implementation models, and non-technical people, who think purely in terms of mental models. The gap shows any time a user calls a software company's hotline.
- Understanding how software actually works will always help someone to use it, but this understanding usually comes at a significant cost. The manifest model allows software creators to solve the problem by simplifying the apparent way the software works. The cost is entirely internal, and the user never has to know. User interfaces that abandon implementation models to follow mental models more closely are better.
- E.g. Photoshop's color balance dialog, where a small dialog box shows a series of small sample images, each with a different color balance, instead of offering numerical settings. The user can simply click on the image that best represents the desired color setting. Because the user is thinking in terms of colors, not in terms of numbers, this dialog box follows the mental model.
User  interfaces  that  conform  to  implementation  models  are  bad...  
 
Mathematical Thinking
- Just because a certain tool is well suited to attacking a problem in software construction doesn't necessarily mean that it is well suited as a mental model for the user. E.g. just because your house is constructed of two-by-four studs and sixteen-penny nails, it doesn't mean that you should have to be skilled with a hammer to live there.
- Most of the data structures and algorithms used to represent and manipulate information in software are logic tools based on mathematical models. All programmers are fluent in these models, including such things as recursion, hierarchical data structures and multithreading. The problem arises when the user interface manifests the concepts of recursion, hierarchical data or multithreading.
- Mathematical thinking is an implementation model trap that a programmer may fall into. Programmers solve programming problems by thinking mathematically, for example in terms of data structures and algorithms.
- E.g. Boolean algebra (AND, OR, NOT) is a compact mathematical system that conveniently describes the behavior of the strictly on-or-off universe that exists inside the digital computer. Consider two operations: AND and OR. The problem is that the English language also has an 'and' and an 'or', and they are usually interpreted by non-programmers as the exact opposite of the Boolean AND and OR. If the program expresses itself with Boolean notation, the user can be expected to misinterpret it.


- In a database query, if I want to retrieve the names of all the employees who live in Kathmandu and Lalitpur, in English I simply say "get employees in Kathmandu and Lalitpur", but in Boolean algebraic terms I would say "get employees in Kathmandu OR Lalitpur". No employee lives in two cities at once, so saying "get employees in Kathmandu AND Lalitpur" is nonsensical in Boolean terms and will always return the empty set as an answer.
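The mismatch is easy to demonstrate in code. The employee records below are invented sample data; the point is only the difference between the two connectives.

```python
# The English/Boolean AND-OR mismatch, made concrete with invented
# sample data.

employees = [
    {"name": "Sita", "city": "Kathmandu"},
    {"name": "Ram",  "city": "Lalitpur"},
    {"name": "Hari", "city": "Pokhara"},
]

# What the English sentence "employees in Kathmandu and Lalitpur"
# actually means: city is Kathmandu OR city is Lalitpur.
both_cities = [e["name"] for e in employees
               if e["city"] == "Kathmandu" or e["city"] == "Lalitpur"]

# The literal Boolean AND: no single record can have both cities at
# once, so the result is always the empty set.
boolean_and = [e["name"] for e in employees
               if e["city"] == "Kathmandu" and e["city"] == "Lalitpur"]
```

Here `both_cities` is `["Sita", "Ram"]` while `boolean_and` is `[]`, exactly the trap a non-programmer falls into when an interface exposes raw Boolean notation.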
 
Mechanical Age Models and Information Age Models
We are experiencing an incredible transformation from a mechanical age to an information age. The change has only begun, and the pace is accelerating rapidly. The upheaval that society underwent as a result of industrialization will be dwarfed by that associated with the information age.

It is only natural for us to drag the imagery and taxonomy of the earlier era into the new one. As the history of the Industrial Revolution shows, the fruits of new technology can often only be expressed at first with the language of an earlier technology.
 
- Mechanical Age Models: importing linguistic or mental images directly from the pre-digital world.
  - For example, when we translate the process of typing with a typewriter into word processing on a computer, we are doing mechanical-age modeling of a common task. Typewriters used little metal tabs to slew the carriage rapidly over several spaces and come to rest on a particular column. The process, as a natural outgrowth of the technology, was called tabbing or setting tabs. Word processors also have tabs because their function is the same.

- When technology changes dramatically, the nature of the task generates the Information Age Model.
  - Sometimes the mechanical age model can't make the cut into the digital world. We don't use reins to steer cars, or even a tiller, although both of these older models were tried in the early days of autos. It took many years to develop an idiom that was unique to and appropriate for the car.
  - These are the tasks, processes or concepts that arise solely because the new technology makes them possible for the first time. With no reason to exist in a non-digital version, they were not conceived of in advance. When the telephone was first invented, for example, it was touted solely as a business tool. Its use as a personal tool wasn't conceived of until it had been in use for 40 years. Today, the phone is used at least as much for personal reasons as it is for business.
 
- New conceptual models are not exclusive to the digital world; they are part of any rapidly shifting context, and technology is our current context. Digital technology is the most rapidly shifting context humankind has witnessed so far, so new and surprising information age models are and will be plentiful.

- When designers rely on mechanical age paradigms to guide them, they are blinded to the far greater potential of the computer to do information management tasks in a better and different way.
   
Modeling from the user's point of view
1. Goal Directed Design
2. Software Design
3. Models of Interface Design
4. Visual Interface Design
Designing for the users: to focus our attention on the goals towards which the users strive.
 
1. Goal Directed Design
- Basis of UID: to achieve the user's goals.
- Most of today's software emerged mostly from "development" rather than "research".
- I.e. designed from the point of view of:
  - A programmer: based on technology and programming methods, or
  - The marketing department: based on market potential, or
  - The user: based on their everyday tasks.

- The user's goals:
  - To create successful, effective software, we must see that it achieves the users' goals.
  - Focusing on the user and his goals rather than on technology and tasks.
  - User's tasks vs. user's goals.
  - Some examples:
    - S/w that is rude: software that assumes its user is computer literate.
    - S/w that is obscure.
    - S/w with inappropriate behavior.

- The essence of user interface design:
  - "Don't make the user look stupid."
  - The only true test of the quality of a user interface is in its context.
  - "A good design makes the user more effective." The goal of all software users is to be more effective.
- Features:
  - Programmers tend to think in terms of functions and features. Users do not usually work that way, step by step.
 

Goal-­‐Directed  Design  is  a  powerful  tool  for  answering  the  most  important  questions  that  crop  
up  during  the  design  phase:  
1. What  should  be  the  form  of  the  program?  
2. How  will  the  user  interact  with  the  program?  
3. How  can  the  program’s  functions  be  most  effectively  organized?  
4. How  can  the  program  deal  with  problems?  
5. How  will  the  program  introduce  itself  to  first-­‐time  users?  
 

2.  Software  Design  
Software isn't designed
• Software, although it is a complex artifact, is rarely designed, while mechanical objects are carefully designed and engineered.
• We are leaving the Mechanical Age and entering the Information Age.
• Software complexity: thousands of lines of code.
• Professionals design most Mechanical Age objects, e.g. automobile engineers, architects.
• The consumer market won't tolerate a lack of order, e.g. the Intel Pentium bug scandal.
• The software industry has to regulate itself.
 
Conflict of interest
• This happens in the world of software development because the people who build the software are also the people who design it.
• Software tools for designing should be used; there is a danger in using programming tools for design.
• Prototyping – useful for design verification.


The profession of software design
• There is a growing awareness of this conflict of interest in the software industry.
• Who designs the software? – Software architects, software engineers, software designers.
• Software design – the software development phase that is responsible for determining how the program will achieve the user's goals.
• The questions answered by this phase include:
1. What will the software do?
2. What will it look like?
3. How will it communicate with the user?
• User interface design – a subset of software design that covers questions 2 and 3.
 
Supporting software design disciplines
- Usability professionals
o Specialize in the study of how people interact with software.
o Conduct interviews with users, observe them using software, then evaluate the quality of user interfaces and make recommendations.
- "Human factors engineering", "human-computer interaction" or "ergonomics"
o Research the behavior of people as they interact with computers.
- Cognitive psychology
o Looks at how people think and understand the world around them, particularly the technical objects they work with.
 
3.  Visual  Interface  Design  
Although   it   is   generally   accept   that   GUI's   are   better   than   character-­‐based   interfaces,   some  
GUI  system  fail  to  give  the  users  the  simplicity  and  the  ease  of  use  that  it  promises.  It  is  not  
just  the  graphical  nature  of  the  interface  that  makes  it  better.    
 
A good user interface must be user-centric, not technology-centric. "Graphicalness" is a technology-centric concept. To make the interface user-centric, we need to take care of the "visualness" of the software and the program's vocabulary.
Humans  process  information  better  visually  than  they  do  textually.  But  the  issue  here  isn't  
the   graphical   nature   of   the   program,   but   it   is   the   visualness   of   the   interaction.   Visual  
Interface   Design   ensures   that   the   users   would   be   able   to   carry   out   their   tasks   smoothly   and  
effortlessly  towards  their  goals.  


4. Visual Processing
We can say that the human brain is a tremendous pattern-processing computer. It manages the vast quantity of information our eyes gather by unconsciously forming patterns, thus reducing the visual complexity. The ability of our unconscious mind to group things into patterns based on visual cues is what allows us to process visual information so quickly and efficiently.
 

Thus a visual interface design should incorporate this eye-brain pattern process. We should present the program's components as recognizable visual patterns, with accompanying text as a descriptive supplement.
 

In visual interface design, symbols represent the components or objects in the interface. These symbols should be used everywhere the object appears on the screen. This teaches the unconscious mind the connection between the symbol and the object, and is called "visual fugue".
 

The vocabulary
The success of the first GUI systems was the result of restricting the range of vocabulary the software uses for communicating with the user. In a command-line interface, the user could enter any combination of characters, so for a correct entry he needed to know exactly what the program expected, i.e. the exact sequence of letters and symbols.
In a GUI, the entire vocabulary is restricted to just a click, a double-click or a click-and-drag. As a result, the learning process becomes easier and less time-consuming.
 

The  Canonical  Vocabulary  -­‐  simple  but  significant  form  of  vocabulary.  


i. Primitives – the atomic, indivisible elements, e.g. pointing, clicking, dragging.

ii. Compounds – more complex constructs created by combining one or more primitives, e.g. double-clicking, click-and-drag.

iii. Idioms – these combine compounds with domain knowledge. Domain knowledge is information related to the user's application area.
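The three tiers above can be sketched as a small model. This is an illustrative sketch only: the event names and mappings below are assumptions made for the example, not part of the text.

```python
# Illustrative sketch of the canonical vocabulary's three tiers.
# All names and mappings here are assumed for the example.

# Tier 1: primitives -- atomic, indivisible input elements.
PRIMITIVES = {"point", "click", "drag"}

# Tier 2: compounds -- more complex constructs built from primitives.
COMPOUNDS = {
    "double-click": ["click", "click"],
    "click-and-drag": ["click", "drag"],
}

# Tier 3: idioms -- a compound plus domain knowledge about the
# user's application area.
IDIOMS = {
    "open-document": ("double-click", "on a file icon"),
    "move-file": ("click-and-drag", "a file icon onto a folder"),
}

def expand(idiom: str) -> list[str]:
    """Reduce an idiom to the primitive actions it is built from."""
    compound, _domain = IDIOMS[idiom]
    return COMPOUNDS[compound]
```

Here `expand("open-document")` reduces the idiom to its primitives: the idiom carries the domain meaning, while the motor actions underneath are just clicks and drags.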


3.  THE  FORM  
Interface  Paradigms  
User  interface  design  begins  well  below  the  surface  of  systems  and  applications.  Imagining  that  we  
can   create   a   good   user   interface   for   our   programs   after   the   program's   internals   have   been  
constructed  is  like  saying  that  a  good  coat  of  paint  will  turn  a  cave  into  a  mansion.  
 
Software designers must fully understand why our computers work the way they do. They must make informed judgments about what to keep because it is good and what to discard even though it is familiar. But getting intimate with the techniques of software development is a seduction the designer must resist. It is all too easy to become sympathetic to the needs of the computer, which are almost always in direct opposition to the needs of the user.
 

There  is  nothing  in  the  world  of  software  development  that  is  quite  as  frightening  as  an  empty  screen.  
When  we  begin  designing  the  user  interface,  we  must  first  confront  that  awful  whiteness,  and  ask  
ourselves:  What  does  good  software  look  like?  
 

The Myth of Metaphor


Software   designers   often   speak   of   finding   the   right   metaphor   upon   which   to   base   their   interface  
design.  They  imagine  that  filling  their  interface  with  images  of  familiar  objects  from  the  real  world  
will   give   their   users   a   pipeline   to   easy   learning.   So   they   create   an   interface   masquerading   as   an  
office  filled  with  desks,  file  cabinets,  phones  and  address  books  etc.  
 

Searching for the magic metaphor is one of the biggest mistakes developers make during interface design. Searching for that elusive guiding metaphor is like searching for the correct steam engine to power an airplane, or searching for a good dinosaur on which to ride to work. Basing a user interface design on a metaphor is not only unhelpful; it can often be quite harmful. The idea that good user interface design relies on metaphors is one of the most insidious of the many myths that permeate the software community.
 
Metaphors may help first-time users learn an interface, but at tremendous cost. By representing old technologies, most metaphors firmly nail our conceptual feet to the ground, which limits the power of the software.
 

Three  interface  paradigms  


• Technology Paradigm
- This is based on understanding how things work, so it is a difficult proposition.
 


• Metaphor Paradigm
- This is based on intuiting how things work, so it is a risky method.
• Idiomatic Paradigm
- This is based on learning how to accomplish things – a natural, human process. It rests on the fact that the human mind is an incredibly powerful learning machine and that learning is not hard.
 
The Technology Paradigm
The technology paradigm of user interface design is simple and incredibly widespread in the computer industry. The technology paradigm merely means that the interface is expressed in terms of its construction – of how it was built. In order to use it, the user must understand how the software works. With the technology paradigm, the user interface is based exclusively on the implementation model.
 
The majority of software programs today follow the technology paradigm in that they clearly show us how they are built: there is one button per function, one function per module of code, and the commands and processes precisely echo the internal data structures and algorithms.
 
The Metaphor Paradigm
The modern GUI was invented at the Xerox Palo Alto Research Center (PARC) in the 1970s. The GUI as defined by PARC consisted of many things, including windows, buttons, mice, icons, metaphors and menus. Some of these components are good and some are not, but they have achieved holy status in the software industry by association with the empirical superiority of the whole. The idea that metaphors are a firm foundation for user interface design is a very misleading proposition. It's like worshipping 5.25" floppy diskettes because so much good software once came on them.
 
The first commercially successful implementation of the PARC GUI was the Apple Macintosh, with its desktop, wastebasket, and overlapping sheets of paper and folders.

The Mac didn't succeed because of these metaphors, but because it was the first computer that defined a tightly restricted vocabulary – a canonical vocabulary based on a very small set of mouse interactions. The metaphors were just paintings on the walls of a well-designed house.
 


Metaphors don't scale very well. A metaphor that works well for a simple process in a simple program will often fail to work well as that process grows in size or complexity. File icons were a good idea when computers had floppies or 10 MB hard disks with only a couple of hundred files, but in these days of multi-gigabyte hard disks and thousands of files, file icons can get pretty awkward.
 
A metaphor in the context of the user interface means a visual metaphor: a picture of something used to represent that thing. Users recognize the imagery of the metaphor and, by extension, can understand the purpose of the thing. Metaphors range from the tiny images on toolbar buttons to the entire screen of a program – for example, a tiny pair of scissors on a button indicating 'Cut'. We understand metaphors intuitively. Intuition works by inference: we see connections between disparate subjects and learn from these similarities while not being distracted by their differences.
 
Metaphors  rely  on  associations  perceived  in  similar  ways  by  both  the  designer  and  the  user.  If  the  
user   doesn't   have   the   same   cultural   background   as   the   designer,   the   metaphor   fails.   Even   in   the  
same  or  similar  cultures  there  can  be  significant  misunderstanding.  
   
Does a picture of an airplane mean "send via airmail", "make an airline reservation", or something else?
The  metaphor  paradigm  relies  on  intuitive  connection  in  which  there  is  no  need  to  understand  the  
mechanics  of  the  software,  so  it  is  a  step  forward  from  the  technology  paradigm,  but  its  power  and  
usefulness  has  been  inflated  to  unrealistic  proportions.  
 
It is silly to imagine that a good user interface can be based on a kind of mental logic. In the user interface design community, the word intuitive is widely used to mean easy to use or easy to understand. There are certain sounds, smells and images that make us respond without any previous conscious learning. When a small child encounters an angry dog, s/he instinctively knows that bared fangs are a sign of great danger, even without any previous learning. The encoding for such recognition goes deep. Instinct is a hard-wired response that involves no conscious thought.
 
Examples of instinct in human-computer interaction include the way we are startled and made apprehensive by gross changes in the image on the screen, or react to sudden noises from the computer or the smell of smoke rising from the CPU.
 


Intuition is the middle ground between having consciously learned something and knowing something instinctively. If we have learned that things glowing red can burn us, we tend to classify all red-glowing things as potentially dangerous until proven otherwise. We don't necessarily know that a particular red-glowing thing is a danger, but the classification gives us a safer place to begin our exploration. We commonly experience intuition as a mental comparison between something new and the things we have already learned. We instantly intuit how to work a trash-can icon, for example, because we once learned how a real trash can works, and thereby the connection is made.
 
The Idiomatic Paradigm
This method of interface design solves the problems of both of the previous two. It is called the idiomatic paradigm because it is based on the way we learn and use idioms – figures of speech like "beat around the bush" or "cool".
 
These idiomatic expressions are easily understood, but not in the same way as metaphors. There is no bush and nobody is beating anything. We understand an idiom simply because we have learned it and because it is distinctive, not because we understand it or because it makes subliminal connections in our minds. This is where the human mind is really outstanding: it learns and remembers idioms very easily, without relying on comparisons to known situations or on understanding how they work.
 
Most of the elements of a GUI are idioms. Windows, caption bars, close boxes, screen splitters and drop-down menus are things we learn idiomatically rather than intuit metaphorically.
 
We are conditioned by the technology paradigm to believe that learning interfaces is hard. Those old interfaces were very hard to learn because we also had to understand how they worked. Most of the time we learn without understanding: things like faces, social interactions, attitudes, and the arrangement of the rooms and furniture of houses and offices. We don't "understand" why someone's face is composed the way it is, but we know that face. We recognize it because we have looked at it and automatically memorized it.
The familiar mouse is not metaphoric of anything; rather, it is learned idiomatically. There is a scene in the movie Star Trek IV where Scotty returns to twentieth-century Earth and tries to speak into a mouse. There is nothing about the physical appearance of the mouse that indicates its purpose or use, nor is it comparable to anything else in our experience, so learning it is not intuitive. However, learning to point at things with a mouse is incredibly easy. Someone probably spent all of three


seconds showing it to you the first time, and you mastered it from that instant on. We don't know or care how mice work, and yet even small children can operate them just fine. That is idiomatic learning.
 
Our language is filled with idioms that, if we haven't been taught them, make no sense. If I say, "Sunil kicked the bucket", you know what I mean even though there is no bucket or kicking involved. You don't know this because you have thought through the various permutations of smacking pails with your feet; you learn it from context, in something you read or by being consciously taught it. You remember this obscure connection between buckets and kicking only because humans are good at remembering stuff like this.
All  idioms  must  be  learned.  Good  idioms  only  need  to  be  learned  once.  
 
Although idioms must be learned, good idioms only need to be learned once. It is quite easy to learn idioms like "neat" or "correct" or "the lights are on but nobody is home" or "red eye". Human minds are capable of learning these from a single hearing. Similarly easy to learn are check boxes, radio buttons, push buttons, close boxes, pull-down menus, icons, tabs, combo boxes, keyboards, mice and pens.
 
Branding
Marketing professionals know the idea of branding: taking a simple action or symbol and filling it with meaning. After all, synthesizing idioms is the essence of product branding, whereby a company takes a product or company name and fills it with a desired meaning. With branding, a meaningless word – a meaningless idiom – can become associated with meaning. Idioms are visual, too. The golden arches of McDonald's, the three diamonds of Mitsubishi, the five interlocking rings of the Olympics, even Microsoft's flying window are non-metaphoric idioms that are instantly recognizable and filled with common meaning.
 
Ironically, many of the familiar GUI elements that are often thought of as metaphoric are actually idiomatic. Artifacts like window close boxes, resizable windows, infinitely nested file folders, and clicking and dragging are non-metaphoric operations; they have no parallel in the real world. They derive their strength only from their easy idiomatic learnability.
'Metaphors are hard to find and they constrict our thinking.'
 
 


Affordance
The term affordance is defined as "the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used."
- Donald Norman, The Psychology of Everyday Things

This definition is fine as far as it goes, but it omits a key connection: how do we know what those properties offer us? If we look at something and understand how to use it – that is, we comprehend its affordance – we must be using some method for making the mental connection.
 
Cooper alters Norman's definition by omitting the phrase "and actual". Affordance then becomes a purely cognitive term, referring to what we think the object can do rather than what it can actually do. If a push button is placed on the wall next to the front door of a residence, its affordance is 100% doorbell. If, when we push it, it causes a trapdoor to open beneath us and we fall in, it turns out that it wasn't a doorbell after all; but that doesn't change its affordance as one.
 
How do we know it's a doorbell? Simply because we have learned about doorbells, door etiquette and push buttons through our complex and lengthy socialization and maturation process. We have all learned about this class of pushable things by exposure to the electrical and electronic devices in our environment.
 
If we see a push button in an unlikely place – say, on the hood of a car – we cannot imagine its purpose, but we can still recognize it as a finger-pushable object. How do we recognize this? Do we know it instinctively? No – a small child wouldn't recognize it as we do. We recognize it as a pushable thing because of our tool-manipulating nature. We, as a species, see things that are finger-sized and placed at finger height, and we automatically push them. This kind of understanding of how things are manipulated with the hands is called manual affordance. When we see things that are long and round, we wrap our fingers around them and grasp them like handles. When things are clearly shaped to fit our hands or feet, we recognize that they are directly manipulable and we need no written instructions.
 
We pull handle-shaped things with our hands, and if they are small, we pull them with our fingers. We push flat plates with our hands or fingers; if they are on the floor, we push them with our feet.


We rotate round things, using our fingers for small ones like dials and both hands for larger ones like steering wheels. These are all manual affordances.
 
What is missing from a manual affordance is any idea of what the thing actually does. We can see that it looks like a button, but how do we know what it will accomplish when it is pressed? For that we rely on text and pictures, but most of all on previous learning. The manual affordance of the scroll bar clearly shows that it is manipulable, but the only thing about it that tells us what it does is the arrows, which hint at its directionality. In order to know that a scroll bar controls our position in a document, we either have to be taught or have to learn by ourselves through experimentation.
 
In the canonical vocabulary, manual affordances by themselves carry no meaning in the uppermost tier, the idioms. This is why gizmos must have writing on them to make sense. If the answer isn't written directly on the gizmo, we can only learn what it does by one of two methods: experimentation or training. Either we try it to see what happens, or someone who has already tried it tells us. We get no help from instinct or intuition; we can only rely on the empirical.
 
The Windows
All GUI systems are built on windows. There are two kinds of windows in use:
1. The main window
2. Subordinate windows – such as dialog boxes and document windows

Choosing and understanding which kind of window to use for a program is one of the primary goals in designing graphical interfaces.
 
Unnecessary rooms
An analogy between rooms and windows:
"Let us imagine our program as a house and each window as a separate room. The house itself is the program's main window, and each room is a subordinate window. Just as we don't add a room to our house unless it has a purpose that cannot be served by other rooms, we shouldn't add a window to our program unless it has a special purpose that can't be served by existing windows."
 
Purpose is a goal-directed term. It implies that using a room is associated with a goal, but not


necessarily with a particular task or function. For example, shaking hands at the front door has a quite different goal than shaking someone's hand in the kitchen, the bedroom or anywhere else.
Software makers should always think of reducing the number of windows used in a program. One of the reasons this is neglected is that during interface design, programmers tend to think in terms of functions, so they often associate each function with a dialog box or other subordinate window. Putting a function in a dialog box emphasizes its separateness from the main task, so if a function is an integral one, the programmer should try to integrate it into the main window.
 
A  Dialog  box  is  another  room.  Have  a  good  reason  to  go  there.  
 
Necessary  rooms  
When it is time to go swimming, you would think it odd to be offered the crowded living room in which to change your clothes. You need a separate room for that, and it would be inappropriate if one were not provided.
 
If the software needs to perform a function that is out of the normal sequence of events, it should provide a separate space in which to perform it. For example, adding records to customer information or editing them might be normal events, but deleting customer information might be an operation that needs a separate dialog box for confirmation.
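The distinction above can be sketched in code. This is a headless, hypothetical sketch (no real GUI toolkit; all names are invented for the example): the integral edit function runs directly in the "main window", while the destructive delete must pass through a separate confirmation "room" first.

```python
# Hypothetical sketch: integral functions stay in the main window,
# while destructive, out-of-sequence functions get a separate
# confirmation step (the "extra room").

customers = {"C001": {"name": "Sunil"}}

def edit_customer(cid, **fields):
    """Integral function: performed inline in the main window."""
    customers[cid].update(fields)

def delete_customer(cid, confirm):
    """Destructive function: routed through a separate 'room'
    (a confirmation dialog) before anything happens."""
    if not confirm(f"Really delete customer {cid}?"):
        return False  # user backed out; nothing changed
    del customers[cid]
    return True

def always_no(prompt):
    return False  # stand-in for the user pressing Cancel

def always_yes(prompt):
    return True   # stand-in for the user pressing OK
```

The design point is that `edit_customer` needs no dialog at all, while `delete_customer` cannot proceed without the confirmation step, mirroring the goal-directed rule that a window should exist only when it serves a purpose no existing window can.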
 
To know for sure when we require a separate subordinate window, we must examine the user's goals. Integral functions should be placed in the main window, while for some functions it is much wiser to use a subordinate window. By examining the user's goals, we are naturally guided to an appropriate form for the program. Instead of merely putting every function in a dialog box, we can see that some functions shouldn't be enclosed in a dialog at all, others belong in a dialog that is integral to the main body of the interface, and still other functions should be removed from the program completely.
 
Windows Pollution
Achieving many user goals involves executing a series of functions. If there is a single dialog box for each function, things can quickly get visually crowded, and navigation becomes confusing.
 
Windows   Pollution   might   result   when   there   are   separate   windows   for   each   of   the   functions   that  


software performs. Not only are users bound to suffer from it; it may also consume large amounts of operating-system resources.
 
Adding  a  squirt  of  oil  to  bicycle  makes  it  pedal  easier,  but  it  doesn't  mean  that  dumping  a  gallon  of  oil  
all  over  it  makes  it  pedal  itself.  
 
File System
Main Memory and Disk Storage
Suppose we have opened a document, made changes to it, and then try to close the application; we see a dialog box asking, "Do you want to save changes?" This appears because the open document exists in two places at the same time: 1. main memory, and 2. the disk. The program issues the dialog box when the user requests CLOSE or QUIT because that is when it has to reconcile the differences between the copy of the document in memory and the copy on disk.
 
The Save Changes dialog box assumes that saving and not saving occur equally often, although this is rarely true. One might even question the need for this dialog box. But in situations where we find ourselves mistakenly making big changes to a file, we can use this dialog box to discard those changes.
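The two-copies situation behind this dialog can be sketched as a tiny model (illustrative only, not any real application's code): the program only needs to ask when the in-memory copy has diverged from the copy on disk.

```python
# Minimal sketch of why "Save changes?" exists: the document lives in
# two places at once, and the dialog is only warranted when the
# in-memory copy differs from the copy on disk. Names are illustrative.

class Document:
    def __init__(self, on_disk=""):
        self.on_disk = on_disk     # the copy on disk
        self.in_memory = on_disk   # the working copy in main memory

    @property
    def dirty(self):
        """True when the two copies have diverged."""
        return self.in_memory != self.on_disk

    def edit(self, text):
        self.in_memory = text      # changes happen in memory only

    def save(self):
        self.on_disk = self.in_memory  # reconcile the two copies

    def close(self):
        # Ask only when there is actually something to reconcile.
        return "ask: Save changes?" if self.dirty else "close silently"
```

Note that `close()` on an unmodified document never raises the dialog at all, which is exactly the text's point: a dialog that is always answered the same way is redundant.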
 
File System
The file system is the tool the computer uses to manage the data and programs stored on disk. For the non-computer-professional, however, it creates a large problem, because it influences the program's interface very deeply. User interface designers face huge problems trying to hide it in order to create better designs for users who are unaware of the computer's internals.
 
Graphical file managers like Windows Explorer in Windows 9x graphically represent the file system, as does most software. This type of design has become a de facto standard even though it is not the best way of dealing with the task.
 
The  tragedy  of  File  System  
The part of a computer system that is the most difficult to understand is the file system, the facility that stores programs and data files on disk.
The difference between ‘main memory’ and ‘disk storage’ is not clear to most people, but the software developed forces the user to know the difference. Every program exists in two places


at once: in memory and on disk, and the same is true for every file, but the user never grasps the difference. When the “Save Changes?” dialog box appears, they just suppress a twinge of fear and confusion and press the YES button. A dialog box that is always answered the same way is a redundant dialog box that should be eliminated.
 
The program issues the dialog box when the user requests CLOSE or QUIT because that is the time when it has to reconcile the differences between the copy of the document in memory and the copy on disk. The way the technology is implemented associates changes with the CLOSE and QUIT operations, but the user doesn’t naturally see the connection.
 
Computer geeks are very familiar with the connection between saving changes and closing or quitting. They don’t want to lose this ability because it is familiar to them, but familiarity is a really bad design rationale. Nobody wants to keep repairing a car just because they are familiar with the shop.
 
The  problems  caused  by  disks  
The computer’s file system is the tool it uses to manage data and programs stored on disk. This mainly means the big hard disks where most of the information resides, but it also includes floppy drives and CD/DVD-ROMs.
 
The File Manager in Windows 3.x and Explorer in later versions of Windows graphically represent the file system. The file system – and the disk storage facility it manages – is the primary cause of disaffection with computers among non-computer professionals.
 
Disks and files make users crazy.
The file system is an internal facility of Windows, but it creates a large problem because its influence on the interface of most programs is very deep. The most intractable problems facing user interface designers usually concern the file system and its demands. It affects menus, dialogs, even the procedural framework of the program, and this influence is likely to continue indefinitely unless we make a concerted effort to stop it.
 
Most  software  treats  the  file  system  in  much  the  same  way  that  the  operating  system  shell  does.  
 
 


Following  the  Implementation  Model    


The implementation model of the file system runs contrary to the mental model of the user. Users visualize files or documents as typical documents in the real world, and they imbue them with the behavioral characteristics of real-world objects. Users visualize two salient facts about all documents: first, there is only one document; second, it belongs to the user. The file system’s implementation model violates both of these facts: there are always at least two copies of the document, and both belong to the program.
 
Every data file, every document and every program, while in use by the computer, exists in a minimum of two places at once: on disk and in main memory. The user imagines the document as a book on a shelf, but the implementation on a computer is quite different. On the computer, the disk drive is the shelf, and main memory is the place where editing takes place, equivalent to the user’s hands. But in the computer world, the digital journal doesn’t come “off the shelf”. Instead a copy is made, and that copy resides in memory while the user is using it; the user actually makes modifications to the copy in main memory while the disk remains unchanged. When the user is done and closes the document, the program is faced with a decision: it must decide whether or not to save the changes. From the programmer’s point of view, the choice can go either way. From the software’s implementation point of view, the choice is the same either way. However, from the user’s point of view, there is rarely a decision to be made at all. He has already made changes to the document and now he only wants to put it away. There is no logic in asking whether to save or not. Compared to the real world, it is as if the user had pulled a paper journal off the shelf, penciled in some changes, and, on replacing it on the shelf, the shelf suddenly spoke up, asking if he really wants to keep those changes!
 
Dispensing  with  the  disk  model  
If we render the file system according to the user’s mental model, we gain some major advantages. One is that teaching novices becomes much easier; another is that user interface designers won’t have to incorporate clumsy file system awareness into their products. Design would thus be more goal-directed rather than driven by the needs of the operating system.
 

Storage  and  Retrieval  Systems  


Storage  System  

ñ Is  a  tool  for  placing  goods  into  a  repository  for  safekeeping?  


ñ Is  composed  of  a  physical  container  and  the  tools  necessary  to  put  objects  in  and  take  back  
out  again.  
 
Retrieval  System  
ñ Is  a  method  for  finding  goods  in  a  repository?  
ñ Is  a  logical  system  that  allows  the  goods  to  be  located  according  to  some  abstract  value,  like  
its  name,  position  or  some  aspects  of  its  contents?  
 
In the physical world, retrieving an item is inevitably linked with how the item was stored. If this concept were also applied to storing and retrieving in computer systems, the sophisticated retrieval techniques that computers allow – retrieval by contents, by modification time, and so on – would never be harnessed.
 
Retrieval  Methods  
There  are  three  fundamental  ways  to  find  a  document.  
1. Positional  Retrieval  –  based  on  place  of  storage  
2. Identity  Retrieval  –  based  on  name  
3. Associative  Retrieval  –  based  on  some  characteristics  of  the  document.  
 
The positional and identity retrieval methods function as storage systems as well; the associative retrieval method does not.
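The three methods can be contrasted over a toy document store. This is only a sketch: the `docs` list, its field names, and the three functions are assumptions invented for the example, not a real file-system API.

```python
# Toy document store illustrating the three retrieval methods.
# Field names (folder, name, text) are illustrative assumptions.

docs = [
    {"folder": "reports", "name": "q1.txt", "text": "sales rose in march"},
    {"folder": "reports", "name": "q2.txt", "text": "sales fell in june"},
    {"folder": "letters", "name": "memo.txt", "text": "meeting about sales"},
]

def positional(folder):
    """Positional retrieval: find documents by where they were stored."""
    return [d["name"] for d in docs if d["folder"] == folder]

def identity(name):
    """Identity retrieval: find a document by its name."""
    return next(d for d in docs if d["name"] == name)

def associative(word):
    """Associative retrieval: find documents by a characteristic of their
    contents, independent of where or under what name they were stored."""
    return [d["name"] for d in docs if word in d["text"]]

print(positional("reports"))          # ['q1.txt', 'q2.txt']
print(identity("memo.txt")["folder"]) # letters
print(associative("sales"))           # all three documents mention sales
```

Note that `positional` and `identity` only work if the user remembers where or under what name the document was put away – which is why those two methods double as storage systems – while `associative` needs no such knowledge.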
 
Existence  of  the  Document  –  The  document  and  the  system  it  lives  in  
In the physical world, the existence of an item does not depend on the storage system. For example, even if a book is not placed on a shelf, it still exists.
A disk file, on the other hand, cannot exist without an association with the file system in which it lives.
 
Indexing  
Indexing allows us to build a retrieval system. For example, in libraries, books can be searched via three indices – author, subject and title – each allowing the user to find a book by an inherent property. This associative retrieval method can easily be implemented in computer systems as well, providing much more powerful retrieval.
 


Associative  Retrieval  System  


An associative retrieval system enables us to find documents by their contents and also helps the user create temporary or permanent groups of documents and use them as the basis for searches. A good associative retrieval system would also allow the user to search in terms of attributes assigned dynamically to each document by the system itself.
For example, a document could be retrieved based on –
1. The  program  that  created  the  document.  
2. Document  type  –  text,  spreadsheet,  database  
3. Size  
4. Last  modified  time  
5. How  often  the  documents  have  been  viewed,  edited,  printed  etc.  
6. Whether  the  document  has  been  printed,  emailed,  faxed  etc.  
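Searching on such system-assigned attributes can be sketched as a predicate filter over an attribute index. The index entries, their field names, and the `search` helper below are all illustrative assumptions, not part of any real retrieval system.

```python
# Sketch of searching by attributes a system could assign automatically
# to each document. Field names are illustrative assumptions.
from datetime import date

index = [
    {"name": "budget", "app": "spreadsheet", "size": 90_000,
     "modified": date(2024, 5, 2), "printed": True},
    {"name": "notes", "app": "word-processor", "size": 4_000,
     "modified": date(2024, 5, 9), "printed": False},
]

def search(**criteria):
    """Return names of documents whose attributes satisfy every criterion.
    Each criterion is a predicate tested against one attribute."""
    return [doc["name"] for doc in index
            if all(test(doc[attr]) for attr, test in criteria.items())]

# "Documents created by the spreadsheet program"
print(search(app=lambda a: a == "spreadsheet"))
# "Documents modified since May 5 that were never printed"
print(search(modified=lambda m: m >= date(2024, 5, 5),
             printed=lambda p: not p))
```

Because every criterion is just a predicate, queries over creating program, type, size, modification time, or print history all use the same mechanism – which is the power the text attributes to associative retrieval.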
 
Document-­‐centric  vs.  File-­‐centric  
In a document-centric world, documents are independent of any particular program, i.e., instead of MS Word or Excel documents, we would have generic documents that could be worked with in any word processor or spreadsheet program.
 
In the file-centric world, applications can only work on the specific file formats they recognize, which makes data exchange problematic.
 
Choosing  Platforms  
Every software designer has to decide which platform the software is going to run on. A choice has to be made whether to write for Intel/Microsoft (DOS or Windows), UNIX, the Macintosh, or for all of them. He must also decide whether to support older hardware or just build software for the new, powerful machines. These choices are difficult, but the best way to go about them is to take a middle path.
 
Software  is  the  expensive  part.  
 
Modern desktop computers should be treated as consumables, like paper clips and stationery, rather than fixed assets or durable goods, like buildings or desks. The reason for this is not that these computers stop working within a few years, but that rapidly progressing technology results in interaction problems and consequently lower productivity. Thus keeping


older desktop computers in critical roles in the business environment might have a disastrous effect. It's like making a cross-country trip by bus instead of flying: it's penny-wise and pound-foolish.
 
There are enormous costs associated with keeping computers beyond their useful and most productive lifetimes. This is due to the interaction problems between aging hardware and the software. A typical PC has dozens of hardware and software components, and the probability of incompatibilities between them grows exponentially as the system ages and new components are added.
 
Choosing a development platform
Many development teams create software that will accommodate all existing hardware. Their management usually colludes in this error by encouraging them to support the five or six or even more older computers that are still ticking away in corporate offices, arguing that it would be too expensive to replace them all. This ignores the fact that the cost of developing software to support both old and new hardware is generally significantly greater than the cost of purchasing and supporting the more powerful new hardware. If the software is written to accommodate those old computers, money saved on hardware is simply spent on software, resulting in much stupider software at greater cost. It should be the responsibility of management to ensure that the computers on desktops throughout the company are as capable as they can be when the new software is ready.
 
Purchase  the  right  software;  then  buy  the  computer  to  run  it.  
 
To develop software for modern platforms, the designer must design the software for the hardware that will be readily available six to twelve months after its release. So predictions should be made about which hardware will be in common use by the time the software is ready, i.e., the software should be designed to perform optimally on hardware that does not exist yet.
 
The best performance can be achieved if the hardware and software components work efficiently together. So, if we have specialized software costing lots of money, we should have the proper hardware for it. The correct approach may be to purchase the software first and only then buy the hardware that runs it.
 


Simultaneous  Multi-­‐platform  development  


Want to kill two birds with one stone? Don't do it.
 
Don't do simultaneous multi-platform development; it is not worth it. Instead, develop only for the primary market, then use its revenue to port to the secondary platform.
 
Simultaneous multi-platform development is not recommended for two reasons:
1) Coding  becomes  complicated.  
2) Interfaces  get  homogenized.  
 
Anything that increases the complexity of source code should be avoided at all costs. It will magnify the time it takes both to write and to debug. The software development manager must avoid uncertainty and delay, and simultaneous multi-platform development generates more uncertainty and delay than any other tactic. If the coding becomes complicated, the time required for writing and debugging gets longer, which no software company wants. There are several libraries available for multi-platform software development; those homogeneous or generic interfaces may be good for the developers, but users will surely dislike them. The reason is simple: Windows users prefer the usual Windows interface and Macintosh users prefer their own interface, even though both are GUIs. A Macintosh user wants Mac sensibility, and the same goes for the Windows user.
 
In the quiet of the office it seems so harmless, so easy, to add a few 'if-else' statements to the source code and magically reap the benefits of supporting an extra hardware platform. This is foolish thinking. Everything in the already problematic discipline of software development becomes harder and more complex. Each design decision must now be made for two platforms, and compromises slip into the product to account for the disparity between the two. If writing for dual platforms increases the amount of code by only 5%, it can increase the time to market by a third. This is incredibly costly.
 
Don't  hamper  the  primary  market  by  serving  the  secondary  market.    
A safer approach is to develop for a single platform first, i.e., the primary market, and then start developing for another platform while generating revenue from the earlier version. The idea here is that developers should not hamper the primary market by serving the secondary markets.


 
This does not mean that we have to abandon the secondary market. The working model of the product, say running on Windows, can be used as the prototype for developing it on other platforms. By that time the product vision will be clear, so development time and other effort will also decrease. Less-experienced programmers can also be used, because it is easy to do clone programming where very little design work is involved.
 
Interoperability  
Interoperability   is   the   ability   of   a   system   or   a   product   to   work   with   other   systems   or   products  
without  special  effort  on  the  part  of  the  customer.    
 
Making software interoperable between platforms is not a good choice for designers – like making a Windows version work in the same way as a DOS-only version. The program should be designed solely for the target platform.
 
Interoperability  builds  choice  so  governments,  developers,  and  citizens  can  decide  what  
technologies  work  best  for  them.  It  drives  innovation  within  a  thriving  IT  industry,  creating  
technologies  that  improve  citizen  services  and  government  efficiency.  
 
Windows users use Windows because they like it and because they don't like the Mac or Linux. Mac or Linux users use those systems because they don't like Windows. If a Windows application acts like a Mac application, the Windows user will be unhappy with it, and vice versa.
 
The  Program  should  be  designed  expressly  for  the  target  platform.  
 


4.  USER  COMPUTER  INTERACTION  


Mouse  
Indirect  Manipulation  

As the mouse is rolled around the desktop, a visual symbol called the cursor moves around the screen. When we move the mouse left and right, the cursor also moves left and right, and the same holds for up-and-down motion.

The motion of the mouse to the cursor is not one-to-one; instead the motion is proportional. On most PCs, the cursor crosses an entire 30-centimeter screen with about 4 centimeters of mouse movement.
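Those two figures imply a fixed mapping gain, which can be sketched as a one-line function. The 30 cm and 4 cm constants come from the text above; the function itself is an illustration (real systems typically apply a variable, speed-dependent gain, often called pointer acceleration).

```python
# Proportional mouse-to-cursor mapping: a 30 cm screen crossed by
# roughly 4 cm of mouse travel implies a gain of about 7.5.

SCREEN_CM = 30.0
MOUSE_CM = 4.0
GAIN = SCREEN_CM / MOUSE_CM   # 7.5 cm of cursor travel per cm of mouse travel

def cursor_travel(mouse_cm):
    """Screen distance covered for a given mouse movement."""
    return mouse_cm * GAIN

print(cursor_travel(4.0))  # 30.0 - a full screen crossing
print(cursor_travel(1.0))  # 7.5
```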

The term “direct manipulation” is commonly used when we talk about mouse interaction, but with the mouse we actually manipulate things indirectly. A light pen points directly at the screen, so it can truly be called direct manipulation. With the mouse, however, we only manipulate the mouse on the desk, not the object on the screen.

− Some people find it difficult to manipulate a mouse. Cooper gives them a name: 'elephants'. A good percentage of computer users are elephants, so programs should be designed with alternatives.

− The person who is the antithesis of the elephant and really loves the mouse, Cooper calls a 'minnie'.

Mouse  Events  Focus  and  Cursor  Hints  

The inventors of the mouse tried to figure out how many buttons to put on it and couldn't agree. Some said one button was correct, while others wanted two. Some advocated a mouse with several buttons.

After all the discussion, Apple ultimately settled on a one-button mouse for their Macintosh, while others agreed on a two-button mouse.

Mouse  Buttons  

-­‐  Left  Mouse  Button  

One of the major drawbacks of the Macintosh is its single-button mouse. The left mouse button is used for all of the major direct-manipulation functions: triggering controls, making selections, drawing, etc. By deduction, this means that the functions the left button doesn't support must be the

non-major functions. The non-major functions either reside on the right mouse button or are not available by direct manipulation, residing only on the keyboard or in menus.

The most common meaning of the left mouse button is activation or selection. For a control such as a push button or a checkbox, the left mouse button means pushing the button or checking the box. When clicking in data, the left button generally means selection.

 
-­‐  Right  Mouse  Button  

The right mouse button was long treated as nonexistent by Microsoft and many others; only some developers connected actions to it. When Borland International embraced object orientation on a company-wide basis, they used the right mouse button to show a dialog box with the properties of the object. At that time Macs had only a single button and Microsoft was not assigning any functionality to the right button. Later, with Windows 95, Microsoft started using the right mouse button. Non-major functions are mostly placed on the right button click.

 
-­‐  Middle  Mouse  Button  

Although application vendors can confidently expect a right mouse button, they can't depend on the presence of a middle mouse button. Because of this, no vendor has focused on the functionality of the middle mouse button. Users work with almost all functionality of the system using the left and right mouse buttons.

Mouse  Events  

Physically, there aren't a lot of things we can do with a mouse. We can move it around to point at different things, and we can press the buttons.

Mouse actions can be altered by using the meta keys CTRL, SHIFT and ALT. The mouse events that can occur on Windows are as follows:

1. Point (Point)

The user moves the mouse until its corresponding on-screen cursor points to, or is placed over, the desired object.


2. Point,  Click,  Release  (Click)  

While holding the mouse steady, the user clicks the button down and releases it. This action is defined as triggering a state change in a gizmo, or selecting an object.

Single-­‐click  selects  data  or  changes  the  gizmo  state.  

For a push-button gizmo, a state change means that while the mouse button is down and directly over the gizmo, the button will enter and remain in the pushed state. When the mouse button is released, the button is triggered and its associated action occurs. If the user, while still holding the mouse button down, moves the cursor off the gizmo, the push-button gizmo returns to its un-pushed state, and when the user then releases the mouse button, nothing happens. This provides a convenient escape route if the user changes his mind.
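The push-button behavior just described – tentative push on button-down, commit on button-up, escape route by dragging off – can be sketched as a small state machine. The `PushButton` class and its method names are illustrative assumptions, not a real GUI toolkit.

```python
# State sketch of the click idiom on a push-button gizmo, including the
# escape route: releasing off the gizmo cancels the action.
# Class and method names are invented for illustration.

class PushButton:
    def __init__(self, action):
        self.action = action
        self.tracking = False   # a press started on this gizmo
        self.pushed = False     # current visual (pushed-in) state

    def button_down(self, over_gizmo):
        # Button-down only *proposes* the action: tentatively push.
        self.tracking = over_gizmo
        self.pushed = over_gizmo

    def mouse_moved(self, over_gizmo):
        # While tracking, the gizmo looks pushed only if the cursor
        # is still over it; dragging off un-pushes it.
        if self.tracking:
            self.pushed = over_gizmo

    def button_up(self, over_gizmo):
        # Button-up *commits*: the action fires only if the press both
        # started and ended over the gizmo.
        fired = self.tracking and over_gizmo
        self.tracking = self.pushed = False
        if fired:
            self.action()
        return fired

log = []
btn = PushButton(lambda: log.append("triggered"))
btn.button_down(over_gizmo=True)
btn.mouse_moved(over_gizmo=False)       # user changes his mind...
print(btn.button_up(over_gizmo=False))  # False: nothing happens
print(log)                              # []
```

The same propose-on-down, commit-on-up split is exactly the rule the notes state later for all gizmos.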

3. Point,  Click,  Drag,  Release  (Click-­‐and-­‐Drag)  

This is a versatile mouse operation. It has many common uses, including selecting, reshaping, repositioning, drawing and dragging-and-dropping.

4. Point,  Click,  Release,  Click,  Release  (Double  Click)  

A double-click is composed of two single clicks. The first thing a double-click should do is the same thing that a single-click does. This is indeed its meaning when the mouse is pointing into data: single-clicking selects something; double-clicking selects something and takes action on it.

  Double-­‐clicking   on   data   is   well   defined;   double-­‐clicking   on   most   gizmos   has   no   meaning.  


Many gizmos don't discard the extra click but process it as a second click. Depending on the gizmo, this can be problematic: if the gizmo is a toggle button, a double-click will return it to the state it was in before the first click.
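Since a double-click is just two single clicks close together, systems distinguish them with a time threshold between clicks. The sketch below shows that grouping; the 500 ms threshold is an assumption for illustration (real systems make it user-configurable).

```python
# Collapse a stream of click timestamps into single/double events.
# The threshold value is an illustrative assumption.

DOUBLE_CLICK_MS = 500

def classify(click_times_ms):
    """Group successive clicks into 'single' and 'double' events."""
    events, i = [], 0
    while i < len(click_times_ms):
        if (i + 1 < len(click_times_ms)
                and click_times_ms[i + 1] - click_times_ms[i] <= DOUBLE_CLICK_MS):
            events.append("double")   # two clicks within the threshold
            i += 2
        else:
            events.append("single")
            i += 1
    return events

print(classify([0, 200]))        # ['double']
print(classify([0, 900]))        # ['single', 'single']
print(classify([0, 200, 2000]))  # ['double', 'single']
```

A gizmo that obeys the rule above would act on the first click immediately and treat the second click as an extension of it, rather than processing it as a fresh press.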

5. Point,  Click,  Click  other  button,  release  (Chord  Click)  

Chord clicking means pressing two buttons simultaneously, although they don't have to be pressed or released at precisely the same time. To qualify as a chord click, the second mouse button must be pressed at some point before the first mouse button is released.

There are two variants of chord clicking. The first is the simplest, whereby the user merely points to something and presses both buttons at the same time. This idiom is very clumsy and has

not found much currency in existing software, although some creatively desperate programmers have implemented it as a substitute for the Shift key in selection.

The second variant uses chord clicking to terminate a drag. The drag begins as a simple, one-button drag; then the user adds the second button. Although this technique sounds more obscure than the first variant, it has actually found wider acceptance in the industry. It is perfectly suited to canceling drag operations.

6. Point,  Click,  Release,  Click,  Release,  Click,  Release  (Triple  Click)  

Some respectable programs have actions that involve triple-clicking, which can challenge even minnies with a high level of manual skill. In Word, triple-clicking is used to select entire paragraphs: a single click selects a character; a double-click selects the word; and a triple-click selects the paragraph. For horizontal, sovereign applications with extremely broad user populations, like word processors and spreadsheets, triple-clicking can be worth implementing. For any program that is used less frequently, it is silly to use triple-clicking.

7. Point,  Click,  Release,  Click,  Drag,  Release  (Double  Drag)  

Double-dragging is another minnie-only idiom. Flawlessly executing a double-click-and-drag can be like patting your head and rubbing your stomach at the same time. Like triple-clicking, it is useful only in mainstream, horizontal, sovereign applications.

Double-dragging is used in Word as a selection tool. A user can double-click in text to select an entire word; expanding on that function, s/he can extend the selection word-by-word by double-dragging.

Up  &  Down  Events  

Each time the user presses a mouse button, the program must deal with two discrete events: the button-down event and the button-up event. With the bold lack of consistency exhibited elsewhere in the world of mouse management, the definitions of the actions to be taken on button-down and button-up can vary with context and from program to program. These actions should be made rigidly consistent.


When selecting an object, the selection should always take place on the button-down event, because button-down is the first step in a dragging sequence: an object cannot be dragged without first clicking on it.

If the cursor is placed over a gizmo rather than selectable data, the action on the button-down event is to tentatively activate the gizmo's state transition. When the gizmo sees the button-up event, it then commits to the state transition.

  Button-­‐down  means  propose  action;  button-­‐up  means  commit  to  action  over  gizmos.  

Cursor  

The cursor is the visible representation of the mouse's position on the screen. It is normally a small arrow pointing slightly west of north, but it can differ. Normally the cursor is 32×32 pixels in size.

Despite its 32×32-pixel size, the cursor must click on a single pixel, so there must be a way for the cursor to indicate precisely which pixel is the one pointed to. This is accomplished by always designating one single pixel of any cursor as the actual locus of pointing, called the hotspot. In the standard arrow, the hotspot is the tip of the arrow. Regardless of the size and shape of the cursor, the hotspot is always a single pixel.
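Hit-testing with a hotspot can be sketched in a few lines: however large the cursor bitmap, only the one designated pixel decides what is being pointed at. The 32×32 size and the arrow-tip hotspot follow the text above; the rectangle representation and function names are illustrative assumptions.

```python
# Hotspot sketch: hit-testing uses a single designated pixel of the
# cursor bitmap, regardless of the bitmap's overall size.

CURSOR_SIZE = (32, 32)   # bitmap dimensions, from the text
HOTSPOT = (0, 0)         # tip of the standard arrow, relative to the bitmap

def hit_test(cursor_pos, rect):
    """True if the cursor's hotspot pixel falls inside rect = (x, y, w, h)."""
    hx = cursor_pos[0] + HOTSPOT[0]
    hy = cursor_pos[1] + HOTSPOT[1]
    x, y, w, h = rect
    return x <= hx < x + w and y <= hy < y + h

button = (100, 100, 80, 24)              # an 80x24 button at (100, 100)
print(hit_test((150, 110), button))      # True: hotspot lies inside the button
print(hit_test((90, 110), button))       # False: bitmap may overlap, hotspot doesn't
```

This is why a 32×32 arrow can still point at one exact character or cell: the object under the hotspot, not under the whole bitmap, is the one that reacts.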

As the mouse is moved across the screen, some things that it points to are inert: clicking the mouse button while the cursor's hotspot is over them provokes no action. Any object or area of the screen that reacts to a mouse action is called pliant. A push-button gizmo is pliant because it can be 'pushed' by the mouse cursor. Any object that can be picked up and dragged is pliant; thus any directory or file icon in the File Manager or Explorer is pliant. In fact, every cell in a spreadsheet and every character in text is also pliant.

Hinting  

There are three basic ways to communicate the pliancy of an object to the user: by the static visual affordance of the object itself, by its dynamically changing visual affordances, or by changing the visual affordances of the cursor as it passes over the object.

If the pliancy of the object is communicated by the static visual affordance of the object itself, it is called static hinting. Static visual hinting is simply the way the object is drawn on the screen. For example, the three-dimensional sculpting of a push button is static visual hinting because of its manual affordance for pushing.


Some visual objects that are pliant are not obviously so, either because they are too small or because they are hidden. If the directly manipulable object is out of the central area of the program's face (in the side posts, scrollbars or status bar), its static appearance may not make clear that the object is directly manipulable. This case calls for more aggressive visual hinting, which is called active visual hinting.

With active visual hinting, when the user passes the cursor over a pliant object, the object changes its appearance with an animated motion. This action occurs as the cursor passes over the object, before any mouse buttons are pressed.

Cursor  Hinting  

If the pliancy of the object is communicated by a change in the cursor as it passes over the object, it's called cursor hinting. Because the cursor is dynamically changing, all cursor hinting is active cursor hinting.

Most popular software intermixes visual hinting and cursor hinting freely. For example, push buttons are rendered three-dimensionally, and the shading clearly indicates that the object is raised and affords to be pushed; when the cursor passes over the raised button, however, it doesn't change. On the other hand, when the cursor passes over a pluralized window's thick frame, the cursor changes to a double-ended arrow showing the axis in which the window edge can be stretched.

Although cursor hinting usually involves changing the cursor to some shape that indicates what type of direct manipulation is acceptable, its most important role is in making it clear to the user that the object is pliant. It's difficult to make data visually hint at its pliancy without disturbing its normal representation, so cursor hinting is often the most effective method. Some gizmos are small and difficult for users to spot as readily as a button or buttcon, and cursor hinting is vital for the success of such gizmos. The column dividers and screen splitters in MS Excel are good examples of this.

Wait  Cursor  Hinting  

Another type of cursor hinting is called wait cursor hinting. Whenever the program is doing something that takes a significant amount of time in human terms, like accessing the disk or rebuilding directories, the program changes the cursor into a visual indication that the program has gone stupid. In Windows this image is the familiar hourglass. Other operating systems have used wristwatches, spinning balls, steaming cups of coffee and so on. Informing the user when the program
becomes   stupid   is   a   good   idea,   but   the   cursor   isn't   the   right   tool   for   the   job.   After   all,   the   cursor  
belongs  to  everybody,  and  not  to  any  particular  program.  

The  user  interface  problem  arises  because  the  cursor  belongs  to  the  system  and  is  just  borrowed  by  
a  program  when  it  invades  that  program's  airspace.  In  a  non-­‐preemptive  system  like  Windows  3.x,  
using  the  cursor  to  indicate  the  wait  is  a  reasonable  idiom  because  when  one  program  gets  stupid,  
they  all  get  stupid.  

In the preemptive multitasking world of Windows 95, when one program gets stupid, it won't necessarily make other running programs get stupid, and, if the user points to one of them, it will need to use the cursor. Therefore, the cursor cannot be used to indicate a busy state for any single program.

Focus  

Focus is an obscure state that is so complex, it has confounded more than one former Windows programming expert. Windows is a multitasking system, which means that more than one program can be performing useful work at any given time. Regardless of the dispatching algorithm, though, no matter how many programs are running concurrently, only one program can be in direct contact with the user at a time. This is why the concept of focus was derived. Focus indicates which program will receive the next input from the user. The active program is the one with the most prominent caption bar (usually dark blue, or whatever color the user has personalized it to). The program with the focus will receive the next keystroke. Because a normal keystroke has no location component, the focus cannot change because of it; a mouse button press, however, does have a location component and can cause the focus to change as a side effect of its normal command. A mouse click that changes the focus is called a new-focus click.

If the mouse is clicked somewhere in a window that already has the focus, that action is called an in-focus click, and there is no change in the window focus.
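The distinction between the two kinds of input can be sketched as a toy event model. This is an illustrative Python model, not the actual Windows message loop; the window names and functions are invented for the example. A keystroke carries no location, so it simply goes to whichever window has the focus; a click carries a location, so it may be a new-focus click or an in-focus click:

```python
# Sketch: a keystroke has no location component and cannot change focus;
# a mouse click has a location and may change focus as a side effect.
# Hypothetical model, not the real Windows messaging system.

class Window:
    def __init__(self, name, rect):
        self.name = name
        self.rect = rect          # (left, top, right, bottom)

    def contains(self, x, y):
        l, t, r, b = self.rect
        return l <= x < r and t <= y < b

windows = [Window("editor", (0, 0, 400, 300)), Window("browser", (400, 0, 800, 300))]
focus = windows[0]

def keystroke(key):
    """No location: the keystroke goes to the focused window, whichever it is."""
    return focus.name

def mouse_click(x, y):
    """A click's location decides whether it is a new-focus or in-focus click."""
    global focus
    for w in windows:
        if w.contains(x, y):
            kind = "in-focus click" if w is focus else "new-focus click"
            focus = w
            return kind
    return "inert"                # clicked on nothing pliant

print(mouse_click(500, 100))  # lands in "browser", which lacked focus: new-focus click
print(keystroke("a"))         # "browser" now receives keystrokes
print(mouse_click(450, 50))   # still inside "browser": in-focus click
```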

Meta-­‐keys  

Using  one  of  the  various  meta-­‐keys  in  conjunction  with  the  mouse  can  extend  direct-­‐manipulation  
idioms.  Meta  keys  include  the  CONTROL  key,  the  ALT  key  and  either  of  the  two  SHIFT  keys.  

Meta-­‐key  cursor  hinting  

Using cursor hinting to show the meanings of meta-keys is an all-around good idea, and more programs should do it. This is something that must be done dynamically. As the meta-key goes down, the cursor should change immediately to reflect the new intention of the idiom.

ALT  meta-­‐key    

 The   ALT   meta-­‐key   is   the   problem-­‐child   of   the   family.   Microsoft   has   avoided   imbuing   it   with  
meaning,  so  it  has  been  rather  a  rudderless  ship  adrift  in  a  sea  of  clever  programmers,  who  use  it  as  
the  whim  strikes  and  ignore  it  otherwise.  

Selection    

There  are  basically  only  two  things  that  can  be  done  with  a  mouse:  Choose  something  and  choose  
something  to  do  to  the  chosen  object.  Those  choosing  actions  are  referred  to  as  selection.  

A fundamental issue in user interfaces is the sequence in which commands are issued. Almost every command has an operation and one or more operands. The operation describes what action will occur, and the operands are the targets of that action. Operation and operand are programmer's terms; interface designers prefer to borrow linguistic terminology, referring to the operation as the verb and the operand as the object. We can specify the verb first, followed by the object, or the object first, followed by the verb. These are commonly called verb-object and object-verb orders, respectively. Either order is good, and modern user interfaces typically use both.

In the days when language compilers like COBOL and FORTRAN were the bee's knees in high technology, all computer languages used verb-object ordering. A typical statement went like this: PERFORM ACTION X ON Y. The verb, PERFORM ACTION, came before the objects X and Y. This ordering was intended to follow the natural formations of the English language. In the world of linguistic processing, though, this actually wasn't convenient, as the computer doesn't like this notation. Compiler writers put considerable effort into swapping things around, making it easier to turn the human-readable source code into machine-readable executable code. But there was never any question that verb-object ordering was the right way to present things to the user (the programmer) because it was clear, natural and effective for written, text-oriented communication with the computer.

When graphical user interfaces emerged, it became clear that verb-object ordering created a problem. In an interactive interface, if the user chooses a verb, the system must then enter a state that differs from the norm: waiting for an object. Normally, the user will then choose an object and all will be well. However, if the user wants to act on more than one object, how does the system know this? It can only know if the user tells it in advance how many operands he will enter, which violates the axiom that the user shouldn't have to ask permission to ask a question. Otherwise, the program must accept operands until the user enters some special object-list-termination command, also a very clumsy idiom.

By swapping the command order to object-verb, we don't need all of that complex termination stuff. The user merely selects which objects will be operated upon and then indicates which verb to execute on them. The software very simply executes the indicated function on the selected data. Notice, though, that a new concept has crept into the equation, one that didn't exist, and wasn't needed, in the verb-object world. That new concept is called selection.

With the object-verb mechanism, rather than the program remembering the verb while the user specifies one or more objects, we are asking the program to remember one or more objects while the user chooses the verb. This way, however, we need a mechanism for identifying, marking and remembering the chosen operands. Selection is the mechanism by which the user informs the program which objects to remember.
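The contrast can be sketched in a few lines. In this illustrative Python model (the names are invented for the example), the program's only job is to remember the selected objects until a verb arrives; no count declaration and no termination command are needed:

```python
# Sketch of the object-verb order: the program remembers the selection
# (the objects) until the user issues a verb, which acts on all of them.
# Illustrative names only.

selection = []                  # the program's memory of chosen operands

def select(obj):
    """The user clicks an object; the program simply remembers it."""
    selection.append(obj)

def execute(verb):
    """The user finally issues a verb; apply it to every remembered object."""
    return [f"{verb}({obj})" for obj in selection]

select("file1.txt")
select("file2.txt")             # any number of objects, no terminator needed
print(execute("delete"))        # ['delete(file1.txt)', 'delete(file2.txt)']
```

In the verb-object order, the roles would be reversed: `execute` would come first and the program would have to guess when the operand list ends.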

The object-verb model can be difficult to understand intellectually, but selection is an idiom that is very easy to grasp and, once shown, rarely forgotten. By the linguistic rules of the English language, it seems nonsensical that we must choose an object first. On the other hand, we use this model frequently in our non-linguistic actions. We purchase groceries by first selecting the objects, by placing them in a shopping cart, then specifying the operation (bringing the cart to the checkout counter and expressing our desire to purchase).

In a non-interactive interface, like a modal dialog box, the concept of selection isn't always needed. Dialog boxes naturally come with one of those object-list-termination commands: the OK button. The user can choose a function first and an object second, or vice versa, because the whole operation won't actually occur until the confirming OK is pressed. This is not to say that object-verb ordering isn't used in most dialog boxes. It merely shows that no particular command ordering has a divine right; the two orderings have strengths and weaknesses that complement each other in the complex world of user
interface.  Both  are  powerful  tools  for  the  software  designer  and  should  be  used  where  they  are  best  
suited.  

In its simplest variant, selection is trivial; the user points to a data object with the mouse cursor, clicks, and the object is selected. This operation is deceptively simple, and in practice many interesting variants are exposed.

 
Concrete  and  discrete  data  

Users select data, not verbs. Selection of objects and selection of data can be done with the same type of click action. The basic variants of selection depend on the basic variants of selectable data, and there are two broad categories of data.

Some  programs  represent  data  as  distinct  visual  objects  that  can  be  manipulated  independently  of  
other   objects.   The   icons   in   the   Program   Manager   and   graphics   objects   in   draw   programs   are  
examples.   These   objects   are   also   selected   independently   of   each   other.   They   are   discrete   data;  
selection   on   discrete   data   is   called   discrete   selection.   Discrete   data   is   not   homogeneous,   and  
discrete  selection  is  not  necessarily  contiguous.  

Conversely, some programs represent their data as a matrix of many little contiguous pieces of data. The text in a word processor or the cells in a spreadsheet are concretions of hundreds or thousands of similar little objects that together form a coherent whole. These objects are often selected in solid groups, so they are called concrete data, and selection within them is called concrete selection.

Both  concrete  and  discrete  selection  support  both  single-­‐click  and  click-­‐and-­‐drag  selection.  Single  
clicking   selects   the   smallest   possible   discrete   amount,   and   clicking-­‐and-­‐dragging   selects   some  
larger  quantity,  but  there  are  significant  differences.  

Insertion  and  Replacement  

Selection   indicates   which   data   the   next   function   will   operate   on.   If   that   next   function   is   a   write  
command,  the  incoming  data  (Keystrokes  or  a  PASTE  command)  writes  onto  the  selected  data.  In  
discrete  selection,  one  or  more  discrete  objects  are  selected,  and  the  incoming  data  is  handed  to  the  
selected  discrete  objects,  which  process  them  in  their  own  way.  This  may  cause  a  REPLACEMENT  
action,  where  the  incoming  data  replaces  the  selected  object.  Alternatively,  the  selected  object  may  
treat  the  incoming  data  as  fodder  for  some  standard  function.  

In concrete selection, however, the incoming data always replaces the currently selected data. In a word processor, when the user types, he replaces what is selected with the typed text. Concrete selection exhibits a unique quirk related to insertion, where the selection can shrink down to a single point that indicates a place between two bits of data, rather than one or more bits of data. This in-between place is called the insertion point. In a word processor, the blinking caret is essentially the least amount of concrete selection available: a location only. It just indicates a position in the data between two atomic elements, without actually selecting either one of them. By pointing and clicking anywhere, the caret can be moved easily, but if the mouse is dragged to extend the selection, the blinking caret disappears and is replaced by a contiguous selection.

Another way to think of the insertion point is as a null selection. By definition, typing into a selection replaces that selection with new text, but if the selection is null, the new text replaces nothing; it is simply inserted. So we can say that insertion is the trivial case of replacement.
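This "insertion is the trivial case of replacement" observation falls straight out of code. In this sketch (a simplified Python model of a text buffer, not any real editor's implementation), a concrete selection is a half-open [start, end) range over the text; when start equals end, the selection is null, the caret, and the same replace operation inserts:

```python
# Sketch: insertion as the trivial case of replacement.
# A concrete selection is the range [start, end) over the text.
# When start == end, it is a null selection (the blinking caret).

def replace_selection(text, start, end, incoming):
    """Replace the selected range with incoming data; inserts when start == end."""
    return text[:start] + incoming + text[end:]

doc = "hello world"
print(replace_selection(doc, 0, 5, "goodbye"))  # replaces "hello": "goodbye world"
print(replace_selection(doc, 5, 5, ","))        # null selection, inserts: "hello, world"
```

No separate insert function is needed; the editor only ever replaces a selection, which is sometimes empty.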

Mutual  Exclusion  

When   a   selection   is   made,   any   previous   selection   is   unmade.   This   behavior   is   called   mutual  
exclusion,   as   the   selection   of   one   excludes   the   selection   of   the   other.   Typically,   the   user   clicks   on   an  
object,  and  it  becomes  selected.  The  object  remains  selected  until  the  user  selects  something  else.  
Mutual  exclusion  is  the  rule  in  both  discrete  and  concrete  selection.  

Some   discrete   systems   allow   a   selected   object   to   be   deselected   by   clicking   it   a   second,   canceling  
time.   This   can   lead   to   a   curious   condition   in   which   nothing   at   all   is   selected,   and   there   is   no  
insertion  point.  

Additive  selection  

A concrete-selection program can't be imagined without mutual exclusion, because the user cannot see or know what effect his actions will have if his selections can readily be scrolled off the screen. Imagine that the user could select several independent paragraphs of text in a long document. It might be useful, but it is not controllable. Scrolling causes the problem, not the concrete selection, but most programs with concrete-selectable data are scrollable.

If mutual exclusion is turned off in discrete selection, there is a simple case where many independent objects can be selected merely by clicking on more than one in turn. This is called additive selection. A listbox, for example, can allow the user to make as many selections as desired. An
entry  is  then  de-­‐selected  by  clicking  it  a  second  time.  Once  the  user  has  selected  the  desired  objects,  
the  terminating  verb  acts  on  them  collectively.  

Most discrete-selection systems implement mutual exclusion by default and allow additive selection only by using a meta-key. The SHIFT meta-key is used most frequently, as in a drawing program where the user selects a graphical object by clicking on it and selects more by SHIFT-clicking.

Concrete-selection systems should never allow additive selection, because there should never be more than a single selection in a concrete system. However, concrete-selection systems do need to enable their single allowable selection to be extended, and again, meta-keys are used. Unfortunately, there is little consensus regarding whether it should be the CTRL or the SHIFT key that performs that role. In Word, the SHIFT key causes everything between the initial selection and the SHIFT-click to be selected. It is easy to find programs with similar actions. There is little practical difference between the choices, so this is an area where following the market leader is best, because it offers the user the small but real advantage of consistency.

Group  Selection  

The click-and-drag operation is also the basis for group selection. In a matrix of text or cells, it means “extend the selection” from the mouse-down point to the mouse-up point. This can also be modified with meta-keys. In Word, for example, CTRL-click selects a complete sentence, so CTRL-drag extends the selection sentence by sentence. A program should offer as many of these variants as possible; experienced users will eventually come to memorize and use them, as long as the variants are manually simple.

In a collection of discrete objects, the click-and-drag operation generally begins a drag-and-drop move. If the mouse button is pressed in the open area between objects, rather than on any specific object, however, it has a special meaning. It creates a dragrect.

A  dragrect  is  a  dynamic  gray  rectangle  whose  upper  left  corner  is  the  mouse-­‐down  point  and  whose  
lower  right  corner  is  the  mouse-­‐up  point.  When  the  mouse  button  is  released,  any  and  all  objects  
enclosed  within  the  dragrect  are  selected  as  a  group.  
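The dragrect's enclosure test is a simple rectangle-containment check. A minimal Python sketch (the data layout is invented for the example):

```python
# Sketch: group selection with a dragrect. Every object whose position
# falls inside the rectangle spanned by the mouse-down and mouse-up
# points is selected as a group.

def dragrect_select(objects, mouse_down, mouse_up):
    """objects: {name: (x, y)}. Returns the names enclosed by the dragrect."""
    left, right = sorted((mouse_down[0], mouse_up[0]))
    top, bottom = sorted((mouse_down[1], mouse_up[1]))
    return {name for name, (x, y) in objects.items()
            if left <= x <= right and top <= y <= bottom}

icons = {"a.txt": (10, 10), "b.txt": (50, 40), "c.txt": (200, 200)}
print(sorted(dragrect_select(icons, (0, 0), (100, 100))))   # ['a.txt', 'b.txt']
```

Note the `sorted` calls on the coordinates: although the canonical dragrect runs from an upper-left mouse-down to a lower-right mouse-up, normalizing the corners lets the user drag in any direction and get the same group.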

Visual  indication  of  selection  

The program should visually indicate to the user when something is selected. The selected state must be easy to spot on a crowded screen, must be unambiguous, and must not obscure the object or what it is.

Make  selection  visually  bold  and  unambiguous.  

If there are only two selectable objects on the screen, the developer must be careful about how to indicate the selection. Anyone who uses the program needs to easily tell which one is selected and which is not; it's not good enough just to be able to see that they are different. Users can also be color-blind, so color alone can't be the factor that distinguishes a selection.

Traditionally, selection is accomplished by inversion: by inverting the pixels of the selected object. On a monochrome screen this means turning all the white pixels to black and all the black pixels to white. Inversion was accomplished by the expedient of exclusive-ORing the pixels of the selected object with all 1 bits or all 0 bits (depending upon the processor). XOR happens to be one of the fastest operations a CPU can execute. XOR is not only fast but, by a curious twist of digital circuitry, the action of an XOR can be undone simply by repeating the identical XOR. Microsoft continued this technique in the first releases of Windows.
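The self-undoing property is easy to demonstrate. A short Python sketch over one 8-pixel row of a monochrome bitmap (an illustration of the bit arithmetic, not actual Windows drawing code):

```python
# Sketch: XOR inversion of monochrome pixels. XORing with all ones flips
# every bit; repeating the identical XOR restores the original -- which is
# why the same cheap operation both draws and erases a selection highlight.

MASK = 0xFF  # all 1 bits for an 8-pixel row of a monochrome bitmap

row = 0b10110010                 # one pixel row: 1 = black, 0 = white
inverted = row ^ MASK            # selection drawn: every pixel flipped
restored = inverted ^ MASK       # selection undone: the identical XOR repeated

print(bin(inverted))   # 0b1001101
print(restored == row) # True
```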

The result of the XOR is only well defined when its operands are binary: one or zero, white pixels or black pixels. Color, however, is represented by more than a single bit; a 256-color screen uses eight bits. When the XOR is used on these more complex numbers, the individual bits invert reliably, but the problem arises when the new value is sent to the physical video screen. Different video drivers can interpret those bits in different ways. The number may be split into smaller pieces to control individual red, green or blue bits. Although the XOR operation will be consistently represented on one computer, it may well be represented completely differently on another computer. The XOR technique works, but the colors we get are defined only by accidents of hardware, not by any standard.

What is the inverse of blue? It's yellow in art class, but in Boolean algebra, it's not defined.

Word processors and spreadsheets almost always show black text on a white background, so it is reasonable to use the XOR inversion to show the selection. And, when colors are used, inversion still works, but the result may differ from system to system.

Microsoft acknowledged this problem by defining two new system color settings: COLOR_HIGHLIGHT and COLOR_HIGHLIGHTTEXT. These two constants represent variable colors rather than fixed colors; each user can change them, and the chosen colors then remain constant across all of that user's applications. When an object is selected, its color changes to whatever color is represented by COLOR_HIGHLIGHT. Any text or other contrasting pixels within a selected object change to whatever color is represented by COLOR_HIGHLIGHTTEXT. If the selection is concrete, as
in a word processor, the background color becomes COLOR_HIGHLIGHT and the foreground text becomes COLOR_HIGHLIGHTTEXT. This new standard normalizes the visual behavior of selection on a color platform.

Use  COLOR_HIGHLIGHT  and  COLOR_HIGHLIGHTTEXT  to  show  selection.  

Selecting  multi-­‐color  objects  

In drawing, painting, animation and presentation programs, where we need to deal with multi-color objects, the only decent solution is to add selection indicators to the image, rather than changing the selected image's colors, whether by inversion or COLOR_HIGHLIGHT. Inversion can obscure details like associated text, while using the single system color forces the program to reduce the selected images to two colors, foreground and background.

In a richly colored environment the selection can get visually lost. The solution is instead to highlight the selection with additional graphics that show its outline.

One of the first Macintosh programs, MacPaint, had a wonderful idiom in which a selected object was outlined with a simple dashed line, except that the dashes all moved in synchrony around the object. The dashes looked like ants marching in a column; thus, the idiom earned the beautiful nickname marching ants.

That   dashed   animation   is   not   hard   to   do,   although   it   takes   some   care   to   get   it   right.   It   works  
regardless  of  the  color  mix  and  intensity  of  the  background.  Adobe’s  Photoshop  uses  this  idiom  to  
show  selected  regions  of  photographs  and  it  works  very  well.  

 Gizmo  Manipulation  

 Direct-­‐manipulation  

Ben Shneiderman coined the term direct-manipulation. He stated three points regarding it:

• Visual representation of the manipulated objects

• Physical actions instead of text entry

• Immediately visible impact of the operation

We tend to think of direct-manipulation as clicking-and-dragging things, and although this is true, it can easily miss Shneiderman's point: of the three points mentioned above, two concern the visual feedback the program offers the user, and only the second concerns the user's actions. It might be more accurate to call it "visual manipulation" rather than "direct manipulation". Unfortunately, many instances of direct-manipulation idioms are implemented without adequate visual feedback, and these fail to satisfy the definition of effective direct-manipulation.

We  can  only  manipulate  information  that  is  already  displayed  by  the  program;  it  must  be  visible  for  
us   to   manipulate   it,   which   emphasizes   the   visual   nature   of   direct-­‐manipulation.   If   we   need   to  
develop   direct   manipulation   idioms   in   the   application,   we   must   take   care   to   render   data,   objects,  
gizmos  and  cursors  with  good  graphical  detail  and  richness.  

Direct-manipulation is simple, straightforward, easy to use and easy to remember. Unfortunately, when users are first exposed to a given direct-manipulation idiom, they generally cannot intuit it. Direct-manipulation must be taught, but the teaching is trivial, usually consisting of merely pointing it out, and once taught, it is never forgotten. Adding metaphoric images may help, but finding an appropriate icon is a hard process.

'Direct  Manipulation:  users  want  to  feel  that  they  are  in  charge  of  computer's  activities.'  
                      -­‐  Apple  

Apple  believes  in  direct-­‐manipulation  as  fundamental  tenet  of  good  user  interface  design.    

“Direct  manipulation,  first  person  systems  have  their  drawbacks.  Although  they  are  often  easy  to  
use,  fun  and  entertaining,  it  is  often  difficult  to  do  a  really  good  job  with  them.  They  require  the  user  
to  do  the  task  directly,  and  the  user  may  not  be  very  good  at  it.”  

              -­‐  Cognitive  psychology  guru,  Don  Norman    

Which  should  we  believe?  Apple  or  Norman?  

The answer is both. As Apple says, direct-manipulation is an extremely powerful tool, and as Norman says, the tool must be put into the hands of someone qualified to use it.

This contradiction illustrates the difference between the various direct-manipulation types. Pushing a button is direct-manipulation, and so is drawing with the pen tool in a paint program. Any normal user can push a button, but few are capable of performing the drawing. These are two variants of direct-manipulation: management and content. Management includes gizmo manipulation like button pushing and scrolling, and is generally accessible to all users. Content is
drawing  and  although  it  can  be  performed  by  anyone,  its  results  will  always  be  commensurate  with  
the  artistic  talent  of  the  manipulator.  

All text and image manipulations in programs such as Corel Draw!, Adobe Photoshop and MS Paint are drawing operations. Programs like Flowcharter and Visio strain the definition, but even their more structured interfaces are still content-centered and require some graphic talent from the user.

In  the  management  category,  there  are  five  varieties  of  direct-­‐manipulation:  

1. Making  Selections  

2. Dragging-­‐and-­‐dropping  

3. Manipulating  gizmos  

4. Resizing,  Reshaping  and  Repositioning  

5. Arrowing  

 
Manipulating  Gizmos  

The mouse actions required for direct-manipulation can be further divided into clicking and clicking-and-dragging.

Most gizmos, like buttcons, push buttons, checkboxes and radio buttons, merely require the user to move the cursor over them and click the mouse button once. In terms of gizmo variants, these are a minority; but in terms of the number of actions a user will take in the average execution of a typical application, single clicks on buttcons and push buttons are likely to be a majority.

Single button click operations are the simplest of direct-manipulation idioms and the ones that work best with gizmos that invoke operations immediately. Naturally, these functions are the ones that fall into the user's working set and will be invoked most frequently.

Beyond these simple gizmos, most direct-manipulation idioms demand a click-and-drag operation. This is a fundamental building block of visual interaction.

Anatomy  of  Drag  

A drag begins when the user presses the mouse button and then moves the mouse without releasing the button. The set of cursor screen coordinates where the user first presses the mouse button is called the mouse-down point, and that where the user releases the button is called the mouse-up point. The mouse-up point only becomes known at the end of the process.

Once  a  drag  begins,  the  entire  interaction  between  the  user  and  the  computer  enters  a  special  state,  
which  is  called  capture.  

In programmer's jargon, all interaction between the system and the user is captured, meaning that no other program can interact with the user until the drag is completed. Any actions the user takes with the mouse, keyboard or any other input device go directly to the program – technically, the window – in which the mouse button first went down. The window that owns the mouse-down is called the master object. If this master object is concrete data or a gizmo, the drag will likely indicate a selection extension or a gizmo state change. However, if the master object is a discrete object, it more likely indicates the beginning of a direct-manipulation operation like drag-and-drop, and capture will play an important role.

Technically,  a  state  of  capture  exists  the  instant  the  user  presses  the  mouse  button,  and  it  doesn't  
end  until  that  mouse  button  is  released,  regardless  of  the  distance  the  mouse  moves  between  the  
two   button   actions.   To   the   human,   a   simple   click-­‐and-­‐release   without   motion   seems   instantaneous,  
but  to  the  program,  hundreds  of  thousands  of  instructions  can  be  executed  in  the  time  it  takes  to  
press  and  release  the  button.  If  the  user  inadvertently  moves  the  mouse  before  releasing  the  button,  
capture  protects  him  from  wildly  triggering  adjacent  controls.  The  master  object  will  simply  reject  
such  spurious  commands.  
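The capture mechanism described above can be modeled as a small state machine. The sketch below is illustrative only (the class and method names are invented, not any real toolkit API): from the instant of mouse-down until the mouse-up, every event is routed to the master object that owns the mouse-down point.

```python
class DragCapture:
    """Toy model of input capture during a drag (hypothetical API)."""

    def __init__(self):
        self.master = None        # window that owns the mouse-down
        self.down_point = None    # known the instant the drag begins
        self.up_point = None      # only known at the end of the process

    def mouse_down(self, window, x, y):
        # Capture begins the instant the button goes down.
        self.master = window
        self.down_point = (x, y)

    def route(self, event):
        # While captured, every event goes to the master object,
        # regardless of what the cursor is currently over.
        return self.master

    def mouse_up(self, x, y):
        self.up_point = (x, y)
        master, self.master = self.master, None   # capture ends here
        return master, self.down_point, self.up_point
```

Note that a plain click-and-release is just a degenerate drag in this model: capture still exists between the two button actions, which is what protects the user from triggering adjacent controls if the mouse moves slightly.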

Escaping  from  capture  

One of the most important – yet most frequently ignored – parts of a drag is a mechanism for getting out of it. The user not only needs a way to abort the drag; if he does abort it, he needs solid assurance that he did so successfully.

The   ESCAPE   key   on   the   keyboard   should   always   be   recognized   as   a   general   purpose   cancel  
mechanism  for  any  mouse  operation,  either  clicking  or  dragging.  If  the  user  presses  the  ESCAPE  key  
while  holding  down  the  mouse  button,  the  system  should  abandon  the  state  of  capture  and  return  
the  system  to  the  state  it  was  in  before  the  mouse  button  was  pressed.  When  the  user  subsequently  

releases  the  mouse  button,  the  program  must  remember  to  discard  that  mouse-­‐up  input  before  it  
has  any  side  effect.  

The meta-keys are often the only keys that have any meaning during drags, so we could actually use any non-meta-keystroke to cancel a mouse stroke, rather than offering only ESCAPE. However, some programs allow the use of the arrow keys in conjunction with the mouse.

Cooper’s  Favorite  

Cooper's favorite cancel idiom is the chord-click, in which the user presses both mouse buttons simultaneously. Typically, the user begins a drag with the left mouse button and then discovers that he doesn't really want to finish what he has begun. He presses the right mouse button, then safely releases both. This idiom is insensitive to the timing or sequence of the release, and works equally well if the drag was begun with the right mouse button.
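The two cancel mechanisms, ESCAPE and the chord-click, can be sketched as one decision function. This is a toy event model with invented event tuples, not a real input API; the key point it demonstrates is that once cancelled, the eventual mouse-up must be swallowed without side effect.

```python
def drag_outcome(events):
    """Decide whether a captured drag completes or is cancelled.

    events: sequence of tuples such as ('down', 'left'),
    ('key', 'ESCAPE'), ('down', 'right'), ('up', 'left').
    ESCAPE, or a chord-click (the other button pressed mid-drag),
    cancels the drag.
    """
    cancelled = False
    started = None
    for kind, detail in events:
        if kind == 'down' and started is None:
            started = detail              # drag begins; capture starts
        elif kind == 'down' and detail != started:
            cancelled = True              # chord-click: second button down
        elif kind == 'key' and detail == 'ESCAPE':
            cancelled = True              # general-purpose cancel key
        elif kind == 'up':
            # A cancelled drag discards the mouse-up: no side effect.
            return 'cancelled' if cancelled else 'completed'
    return 'in-progress'
```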

Informing  the  user  

If the program is well designed and enables the user to cancel out of a drag operation with the ESCAPE key or a chord-click, the problem is still not solved. The program must also assure the user that he is now safe.

The cursor may have been changed to indicate that a drag was in progress, or an outline of the dragged object may have been moving with the cursor. The cancellation makes these visual hints go away, but the user may still wonder if he is truly safe. A user may have pressed the ESCAPE key but still be holding the mouse button down, unsure whether it is entirely safe to let go of it. It is cruel and unusual punishment to leave him in that state. It is imperative that he be informed that the operation has been effectively canceled and that releasing the mouse button is OK. It can't hurt – and can only help – to make sure that he gets a reassuring message. The message should clearly state that the drag is harmlessly over.

If the mouse goes down inside a gizmo, the gizmo must visually show that it is poised to undergo a state change. This action is important and is often neglected by those who create their own gizmos. It is a form of active visual hinting called the pliant response.

A push button needs to change from a visually outdented state to a visually indented state; a checkbox should highlight its box but not show a check just yet. The pliant response is an important feedback mechanism for any gizmo that either invokes an action or changes its state, letting the user know that some action is forthcoming if he releases the mouse button. The pliant response is also an important part of the cancel mechanism. When the user clicks down on a button, that button responds by indenting. If the user moves the mouse away from that button while still holding the mouse button down, the push button should return to its quiescent, outdented state. If the user then releases the mouse, the button will not be activated.
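The pliant response of a push button, including its role as a cancel mechanism, reduces to a small decision table. The function below is a minimal sketch of that logic under an invented interface (three booleans in, a visual state and an activation flag out); real toolkits bury this inside their button classes.

```python
def button_state(pressed_inside, cursor_inside, button_down):
    """Pliant response for a push button (illustrative sketch).

    pressed_inside: the mouse-down happened on this button (it has capture)
    cursor_inside:  the cursor is currently over the button
    button_down:    the mouse button is still held
    Returns (visual_state, activated).
    """
    if not pressed_inside:
        return ('outdented', False)       # button never owned the drag
    if button_down:
        # Indent while poised; pop back out if the cursor strays away.
        return ('indented', False) if cursor_inside else ('outdented', False)
    # On release: activate only if the cursor is still over the button.
    return ('outdented', cursor_inside)
```

The third test below captures the cancel idiom described above: press, drag off the button, release, and nothing happens.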

Dragging  Gizmos  

Many gizmos, particularly menus, require the moderately difficult motion of a click-and-drag rather than a mere click. This direct-manipulation operation is more demanding of the user because of its juxtaposition of fine motions with gross motions: to click, drag and then release the mouse button. Although menus are not used as frequently as toolbar gizmos, they are still used very often, particularly by new or infrequent users. The menu is the primary gizmo for beginners, yet it is one of the most difficult gizmos to operate physically.

One of the nice features of Windows 3.x is the ability to work its menus with a series of single clicks rather than clicking-and-dragging. When the menu title is clicked, the menu drops down. The desired menu item is then clicked and the menu closes. Apple hasn't included this idiom in its interface. In Windows 95, Microsoft has extended the idea by putting the program into a sort of "menu mode". When in menu mode, all of the top-level menus in the program and all of the items on those menus are active. As the mouse is moved around, each menu drops down without the mouse button being used at all. This may be confusing for first-time users, but after some use they will find it pleasant.

There are other types of click-and-drag gizmos; the cascading menu is another variant. In a cascading menu, a menu is pulled down in the normal way, and then a secondary menu is launched from an item on the first menu by dragging the mouse to the right. Cascading menus can be stacked so there can be more than one level; they form a hierarchy of menus.

Cascading menus demand a fair amount of skill from the mouse user, because any false move that causes the cursor to detour outside of the enclosing menu rectangle will cause one or another of the menus to disappear. Cascades can be a frustrating gizmo to manipulate, and although they have their place in interface design, using them for frequently used functions is not recommended. Microsoft Windows 95 makes extensive use of cascading menus.

Repositioning    

Gizmos  that  depend  on  click-­‐and-­‐drag  motions  include  icons  and  the  various  repositioning,  resizing  
and  reshaping  idioms.  Repositioning  is  the  simple  act  of  clicking  on  an  object  and  dragging  it  to  
another  location.  

The most significant design issue regarding repositioning is that it usurps the place of other direct-manipulation idioms. Repositioning is a form of direct-manipulation that takes place on a higher conceptual level than that occupied by the object being repositioned. Repositioning doesn't mean manipulating some aspect of the object; it simply means manipulating its placement. This action consumes the click-and-drag action, making it unavailable for other purposes. If the object is repositionable, the meaning of click-and-drag is taken and cannot be devoted to some other action within the object itself, like a button press.

The  most  general  solution  to  this  conflict  is  to  dedicate  a  specific  physical  area  of  the  object  to  the  
repositioning  function.  A  window  in  Windows  or  on  the  Mac  can  be  repositioned  by  clicking-­‐and-­‐
dragging  its  caption  bar.  The  rest  of  the  window  is  not  pliant  for  repositioning,  so  the  click-­‐and-­‐drag  
idiom  is  available  for  more  application  specific  functions.  The  only  hint  of  the  window's  draggability  
is  the  color  of  the  caption  bar,  a  subtle  visual  hint  that  is  purely  idiomatic:  there  is  no  way  to  intuit  
the   presence   of   the   idiom.   But   the   idiom   is   very   effective,   and   it   merely   proves   the   efficacy   of  
idiomatic  interface  design.  

The caption bar does double duty as a program identifier, an active-status indicator and a repository for certain other standard controls.
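Dedicating a region of the object to repositioning amounts to a hit test on the mouse-down point. A minimal sketch, assuming a simple rectangle model and an arbitrary caption-bar height (both invented for illustration):

```python
CAPTION_HEIGHT = 20   # illustrative; real systems read this from the theme

def drag_role(window, x, y):
    """Classify a mouse-down inside a window given as (left, top, w, h)."""
    left, top, w, h = window
    if not (left <= x < left + w and top <= y < top + h):
        return 'outside'
    if y < top + CAPTION_HEIGHT:
        return 'reposition'   # caption bar: click-and-drag moves the window
    return 'content'          # interior: click-and-drag stays free for the app
```

Because only the caption bar answers 'reposition', the click-and-drag idiom over the rest of the window remains available for application-specific functions, exactly as described above.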

To move an object, it must first be selected. This is why selection must take place on the mouse-down transition: the user can drag without having to first click-and-release on an object to select it, then click-and-drag on it to reposition it. It feels much more natural to simply click it and then drag it to where it is required – similar to moving a book from one place to another. In Word, by contrast, Microsoft has given us the clumsy click-wait-click operation for dragging a chunk of text: the user must click-and-drag to select a section of text, then wait a second or so, and click-and-drag again to move it to another location.

Resizing  and  Reshaping  

When referring to the desktop of Windows and other similar GUIs, there isn't really any functional difference between resizing and reshaping. The user adjusts a rectangular window's size and aspect ratio at the same time and with the same control, by clicking-and-dragging on a dedicated gizmo. On

the   Mac,   there   is   a   special   resizing   control   on   each   window   in   the   lower   right   corner,   frequently  
nestled   into   the   space   between   the   application's   vertical   and   horizontal   scrollbars.   Dragging   this  
control  allows  the  user  to  change  both  the  height  and  width  of  the  window.  

Windows 3.x avoided this idiom in favor of the thickframe surrounding each window. The thickframe is an excellent solution, offering both generous visual hinting and cursor hinting, so it is easily discovered. Its shortcoming is the amount of real estate it consumes: it may only be four or five pixels wide, but totaled along all four sides of the window, this mechanism is expensive.

Windows 95 institutes a new reshaping-resizing gizmo that is remarkably like the Mac's lower-right-corner reshaper/resizer. This gizmo is a little triangle with 45-degree 3D ribbing. Because this new gizmo is a combination of shaper and triangle, it can be named the 'shangle'. The shangle occupies a square space on the window, but most windows have a status bar across their bottom, so the shangle can be kept at the end of the status bar. Windows 95 also retains the thickframe and its cursor hinting, but the thickframe has no visual hinting. User interface designers are Mac-influenced, so the shangle is gaining popularity.

Thickframes and shangles are fine for reshaping windows, but when the object to be resized is a graphical element in a painting or drawing program, it is not acceptable to permanently superimpose controls onto it. A resizing idiom for graphical elements must be tightly coupled to the object it controls, and it must be respectful of the user's view of the object and its space. The resizer must not obscure the resizing action.

There is a popular idiom that accomplishes these goals. It consists of eight little black squares positioned one at each corner of a rectangular object and one centered on each side. Those little black squares are called 'handles', but that word is overbooked in the programming world, so this idiom can instead be called 'grapples'.

Grapples   are   a   boon   to   designers   because   they   can   also   indicate   selection.   This   is   a   naturally  
symbiotic  relationship,  as  an  object  must  usually  be  selected  to  be  resizable.  

A grapple centered on a side moves only that side, while the other sides remain motionless. The grapples on the corners simultaneously move both of the sides they touch.
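The side-versus-corner behavior of the eight grapples can be expressed in a few lines. This is a geometry-only sketch (grapple names and the rectangle tuple are conventions invented here); real drawing programs would additionally clamp against inverted rectangles.

```python
def resize(rect, grapple, dx, dy):
    """Apply a grapple drag to a rectangle given as (left, top, right, bottom).

    A side grapple ('top', 'left', 'right', 'bottom') moves only that side;
    a corner grapple ('top-left', 'bottom-right', ...) moves both of the
    sides it touches.
    """
    left, top, right, bottom = rect
    if 'left' in grapple:
        left += dx
    if 'right' in grapple:
        right += dx
    if 'top' in grapple:
        top += dy
    if 'bottom' in grapple:
        bottom += dy
    return (left, top, right, bottom)
```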

Grapples tend to obscure the object they represent, so they don't make very good permanent controls. This is why we don't see them on top-level resizable windows. For that situation, the thickframe or shangle is the better idiom. If the selected object is larger than the screen, the grapples may not be visible. If they are hidden off screen, not only are they unavailable for direct manipulation, but they are useless as indicators of selection.

In the Windows world, things that are rectangular are easy for programs to handle, and non-rectangular things are best handled by enclosing them in a bounding rectangle. To represent objects more complex than rectangles, there is a very powerful and useful variant of the grapple, called the vertex grapple.

Many programs draw objects on the screen with polylines. A polyline is a graphics programmer's term for a multi-segment line defined by an array of vertices. If the last vertex is identical to the first vertex, the form is closed and the polyline is a polygon. When such an object is selected, the program, rather than placing eight grapples as it does on a rectangle, places one grapple on top of every vertex of the polyline. The user can then drag any vertex of the polyline independently, changing one small aspect of the object's internal shape rather than affecting it as a whole.
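A vertex grapple drag touches exactly one entry in the vertex array. A minimal sketch, assuming vertices are (x, y) tuples and using the closed-form convention just described (first vertex repeated as the last):

```python
def drag_vertex(polyline, index, dx, dy):
    """Move one vertex of a polyline independently (vertex grapple).

    If the polyline is closed (first and last vertex identical), dragging
    either endpoint moves both, so the polygon stays closed.
    """
    pts = list(polyline)
    closed = len(pts) > 1 and pts[0] == pts[-1]
    x, y = pts[index]
    pts[index] = (x + dx, y + dy)
    if closed and index in (0, len(pts) - 1):
        pts[0] = pts[-1] = pts[index]   # keep the figure closed
    return pts
```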

Many objects in PowerPoint, including polygons, are rendered with polylines. If a polygon is clicked, it is given a bounding rectangle with the standard eight grapples. If the polygon is double-clicked, the bounding rectangle disappears and vertex grapples appear instead.

Resizing  and  reshaping  meta-­‐key  variants  

In the context of dragging, a meta-key is often used to constrain the drag to an orthogonal direction. This type of drag is called a constrained drag.

A constrained drag is one that stays on a 90-degree or 45-degree axis regardless of how the user might veer off a straight line with the mouse. Usually the SHIFT meta-key is used, but this convention varies from program to program. Constrained drags are extremely helpful in drawing programs, particularly when drawing business graphics, which are generally neat diagrams. The angle of the drag is determined by the predominant motion of the first few millimeters of the drag. If the user begins dragging on a predominantly horizontal axis, for example, the drag will henceforth be constrained to the horizontal axis. Some programs interpret constraints differently, letting the user shift axes in mid-drag by dragging the mouse across a threshold. Either way is fine.

When  a  drag  is  constrained,  usually  by  holding  down  the  SHIFT  key,  the  object  is  only  dragged  along  one  of  the  
four  axes  as  in  figure.  The  program  selects  which  one  by  the  direction  of  the  initial  movement  of  the  mouse.  
 

The paint program that comes with Windows 95 doesn't constrain drags when moving an object around, but it does constrain the drawing of a few shapes, like lines and circles. Most programs with drawing features that treat graphics as objects (like MS PowerPoint) rather than as bits (like MS Paint) allow constrained drags.
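The axis snapping described above can be sketched as a pure function from the mouse-down point and the current cursor position to the constrained position. For simplicity this sketch picks the axis from the larger displacement component on every update, whereas the text notes that real programs lock the axis from the first few millimeters of motion; that refinement is omitted here.

```python
def constrain(down, current, shift_held, allow_diagonal=False):
    """Constrain a drag to an axis while the SHIFT meta-key is held.

    down, current: (x, y) points. With allow_diagonal, a roughly
    diagonal motion snaps to the 45-degree axis as well.
    """
    if not shift_held:
        return current
    dx, dy = current[0] - down[0], current[1] - down[1]
    if allow_diagonal and abs(dx) and 0.5 < abs(dy / dx) < 2:
        mag = max(abs(dx), abs(dy))     # snap to the 45-degree axis
        return (down[0] + mag * (1 if dx > 0 else -1),
                down[1] + mag * (1 if dy > 0 else -1))
    if abs(dx) >= abs(dy):
        return (current[0], down[1])    # horizontal axis
    return (down[0], current[1])        # vertical axis
```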

Arrowing  

A direct-manipulation idiom that can be very powerful in some applications is arrowing, in which the user clicks-and-drags from one object to another; but instead of dragging the first object onto the second, an arrow is drawn from the first object to the second one. This type of idiom is mostly seen in project management or organizational chart programs. For example, to connect one task box in a project manager's network diagram (PERT chart) with another, the user clicks and drags an arrow between them. The direction of the arrowing is significant: the task where the mouse button went down is the 'from' task, and the one where the mouse button is released is the 'to' task.

The visual arrows generally behave in a manner best described as rubber banding.

Rubber banding is where the arrow forms a line that extends from the exact mouse-down point to the current cursor position. The line is animated, so as the user moves the cursor, the position of
the   cursor-­‐end   of   the   line   is   constantly   pivoting   on   the   anchored   end   of   the   line.   Once   the   user  
releases   the   mouse   button,   the   mouse-­‐up   point   is   known,   and   the   program   can   decide   whether   it  
was   within   a   valid   target   location.   If   so,   the   program   draws   a   more   permanent   visual   arrow  
between  the  two  objects.  Generally,  it  also  links  them  logically.  

As the user drags the end of the arrow around the screen, input is captured, and the rules of dragging discrete data apply.

The arrowing function can't normally be triggered by the left button because it would collide with selection and repositioning. In some programs it is triggered by the right button, but Windows 95 makes that problematic with its usurpation of the right click for the context menu.

Arrowing doesn't require cursor hinting as much as other idioms do, because the rubber-banding effect is so clearly visible. However, in programs where objects are connected logically, it would be a big help to show which of the objects currently pointed to are valid targets for the arrow. In other words, if the user drags an arrow until it points to some icon or widget on the screen, how can he tell whether that icon or widget is a legal target? The answer is to have the potential target object engage in some active visual hinting.
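Putting rubber banding and active target hinting together: on every mouse move during the captured drag, the program redraws the line from the anchored mouse-down point to the cursor, and highlights any valid target under the cursor. The sketch below uses an invented data model (targets as named rectangles) purely to illustrate the feedback decisions.

```python
def arrow_feedback(anchor, cursor, targets):
    """Per-move feedback for a rubber-banded arrow (illustrative sketch).

    anchor:  the mouse-down point the line pivots on.
    cursor:  current cursor position.
    targets: dict mapping target name to (left, top, right, bottom).
    """
    hit = None
    for name, (l, t, r, b) in targets.items():
        if l <= cursor[0] < r and t <= cursor[1] < b:
            hit = name                  # cursor is over a valid target
            break
    return {
        'line': (anchor, cursor),       # rubber band, redrawn every move
        'highlight': hit,               # active visual hinting on the target
        'drop_valid': hit is not None,  # would mouse-up draw the arrow?
    }
```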

Direct-­‐manipulation  visual  feedback  

The  key  to  successful  direct  manipulation  is  rich  visual  feedback.    

The direct-manipulation process divides into three distinct phases:

1. Free  Phase:  Before  the  user  takes  any  action.  

2. Captive  Phase:  Once  the  user  has  begun  the  drag.  

3. Terminating  Phase:  After  the  user  releases  the  mouse  button.  

In  the  Free  Phase,  our  job  is  to  indicate  direct-­‐manipulation  pliancy.  

In  the  Captive  Phase,  we  have  two  tasks.  We  must  positively  indicate  that  the  direct-­‐manipulation  
process  has  begun,  and  we  must  visually  identify  the  potential  participants  in  the  action.  

In the Terminating Phase, we must plainly indicate to the user that the action has terminated and show exactly what the result is.
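The three phases above map directly onto raw mouse state, and each phase carries its own feedback obligation. The sketch below is a design checklist in code form rather than toolkit code; the phase names follow the list above.

```python
# Feedback each phase owes the user (from the discussion above).
FEEDBACK = {
    'free':        'indicate direct-manipulation pliancy',
    'captive':     'show the drag has begun; identify potential participants',
    'terminating': 'show the action has ended and exactly what the result is',
}

def phase(button_down, released):
    """Map raw mouse state to the current direct-manipulation phase."""
    if released:
        return 'terminating'
    return 'captive' if button_down else 'free'
```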

Depending on which direct-manipulation phase we are in, there are two variants of cursor hinting. During the free phase, any visual change the cursor makes as it merely passes over something on the screen is called free cursor hinting. Once the captive phase has begun, a change to the cursor is called captive cursor hinting.

Microsoft Word uses the clever free hint of reversing the angle of the arrow when the cursor is to the left of text, to indicate that selection will be line-by-line or paragraph-by-paragraph instead of character-by-character as it normally is within the text itself. Many other programs use a hand-shaped cursor to indicate that the document itself, rather than the information in it, is draggable.

Microsoft is using captive cursor hinting more and more as it discovers its usefulness. Dragging-and-dropping text in Word or cells in Excel is accompanied by cursor changes indicating precisely what the action is and whether the objects are being moved or copied. When a file is dragged in Windows 95, the text of the file's name is actually dragged from one place to another.

When something is dragged, the cursor must drag either the thing itself or some simulacrum of that thing. In a drawing program, for example, when a complex element is dragged from one position to another, it may be difficult for the program to drag the actual image, so it often drags just an outline of the object instead. When the drag will create a copy of the object rather than move the object itself, the cursor may change from an arrow to an arrow with a little plus sign over it, to indicate that the operation is a copy rather than a move. This is a clear example of captive cursor hinting.

Drag and Drop

REFER VPLN_CCXLVII

5.  THE  CAST  

Windows, menus, dialogs and push buttons are the most visible trappings of the modern graphical user interface, but they are effects, rather than causes, of good design. They serve a purpose, and we have to understand how they fit into the designer's toolbox. We must understand why each component exists and what purpose and effects each has before we can profitably fit them into the system we are developing.

Menu Design Issues (Drop-Down Menus, Popup Menus, Hierarchy of Menus), Menus and their types (Standard Menus, Optional Menus, System Menu, Menu Item variations), Dialog Boxes (Dialog Box Basics, Suspension of interaction, Modal and Modeless Dialog Boxes, Problems in Modeless Dialog Boxes, Different types of Dialog Boxes), Dialog Box conventions (Caption Bar, Attributes, Termination dialog boxes, Expanding dialog boxes, Cascading dialog boxes), Toolbars (Advantages over menus, Momentary buttons and latching buttons, customizing toolbars)

The  Command  Line  

The user must know the program's commands and must enter a command to perform any processing.

The  Hierarchical  Menu  Interface  

− A list of choices is presented to the user, who reads the list and selects an item from it – similar to choosing a dish by reading the menu in a restaurant.

− This enables the user to forget many of the commands and option details required by the command-line interface; users don't need to remember any commands.

− A typical menu would offer half a dozen choices, each indicated by an ordinal from 1 to 6; the user would enter the number to select the corresponding option. Once the user made his selection, it was set in concrete – there was no going back. Users made a lot of mistakes with this type of menu, so programmers added confirmation menus, which accept the input as before but issue another menu saying "Press ESCAPE to change the selection & ENTER to proceed." These menus are not user-friendly, because only one menu can be displayed at a time.
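The numbered-menu-with-confirmation interaction described above can be modeled as a tiny loop. This is a toy model with invented input conventions (ordinals as strings, 'ENTER'/'ESCAPE' keystrokes), meant only to make the select-confirm-or-reselect cycle concrete:

```python
def run_menu(options, inputs):
    """Toy model of the old numbered hierarchical menu with confirmation.

    options: list of choice labels presented as 1..N.
    inputs:  sequence of keystrokes; after an ordinal, 'ENTER' confirms
             the selection and 'ESCAPE' re-opens the menu.
    """
    it = iter(inputs)
    for key in it:
        choice = options[int(key) - 1]   # user types an ordinal
        confirm = next(it)               # the confirmation menu appears
        if confirm == 'ENTER':
            return choice                # now it is set in concrete
        # ESCAPE: back to the menu for another selection
    return None
```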

   

Menu  

A  menu  is  a  list  of  items  that  specify  options  or  groups  of  options  (a  submenu)  for  an  application.  
Clicking  a  menu  item  opens  a  submenu  or  causes  the  application  to  carry  out  a  command.  

The  POPUP  Menu  
A popup window is a rectangle that appears on the screen, overlapping and obscuring the main part of the screen, until it has completed its work, whereupon it disappears, leaving the original screen behind untouched. The popup window is the mechanism used to implement both pull-down menus and dialog boxes.

In a windowing GUI, menu titles are visible across the top row of the screen; the user points and clicks on a menu title, and its direct subordinate list of options immediately appears in a small window just below it, which is called a popup menu.

The user makes a single choice from the popup menu by clicking once, or by dragging and releasing.

Enough choices can be displayed on the main bar to organize all of the program's functions into the required number of groups (five, six or more), and each group is given a name, which becomes the menu title.

Pedagogic  Vector  

Two idioms have become more popular in the modern windowing GUI: direct-manipulation and toolbars. Direct-manipulation idioms spread slowly, but toolbars brought a drastic change to the windowing GUI. Nowadays almost all Windows-based programs have a toolbar covered with buttcons.

Each distinct technique for issuing instructions to the program is called a command vector. Menus are a good command vector, as are direct manipulation and toolbar buttcons. A good user interface provides multiple command vectors, where each function in the program has menu commands, toolbar commands, keyboard commands and direct-manipulation commands, each with the parallel ability to invoke a given command.

Direct-manipulation and toolbar-buttcon command vectors have the property of being immediate vectors: there is no delay between pressing a buttcon and seeing the results. Direct manipulation

also has an immediate effect on the information, without any intermediary. Neither menus nor dialog boxes have this immediate property; each requires an intermediate step, sometimes more than one.

The buttcons and other gizmos on the toolbar are usually redundant with respect to commands in the menu. Buttcons are immediate, while menu commands remain relatively slow. Menu commands have great advantages, however, in their English descriptions of the functions and in the detailed controls and data that appear on the corresponding dialog boxes. This detail makes the menu/dialog command vector the most useful for learning, so it is called the pedagogic vector.

Menus and dialogs are the pedagogic vector.

The pedagogic vector also means that the menu items must be complete, offering a full selection of the actions and facilities available in the program. Every dialog box in the program should be accessible from some menu option. A scan of the menus should make clear the scope of the program and the depth and breadth of its various facilities.

When users look at a program for the first time, it is often difficult for them to size up what that program can do. An excellent way to get an impression of the power and purpose of an application is to glance at the set of available functions by way of its menus and dialogs.

Understanding the scope of what a program can and can't do is one of the fundamental aspects of creating an atmosphere conducive to learning. Many otherwise easy-to-use programs put the user off because there is no simple, unthreatening way to find out just what the program is capable of doing.

The toolbar and other direct-manipulation idioms can be too inscrutable for the first-time user to understand, or even to fit into a framework of possibilities, but the textual nature of the menus serves to explain the functions.

Standard  Menus  

Most GUIs these days have at least a 'File' and an 'Edit' menu in the two leftmost positions and a 'Help' menu all the way over to the right. The Windows style guide states that these File, Edit and Help menus are standard.

The File menu is named after an accident of the way our operating systems work. The Edit menu is based on the very weak clipboard, and the Help menu is frequently the least helpful source of insight and information for the befuddled user.


ñ The  File  Menu  

ñ The  Edit  Menu  

ñ The  Windows  Menu  

ñ The  Help  Menu  

The  Correct  Menu  

There is no fixed answer for which set of menus is correct. It depends upon the requirements of the program.

Using every spatial and visual hint, we should arrange the menus from left to right in some meaningful order. We could put Help in the far left position because it may be used first, but that is not good, because the Help menu is not used much after users get acquainted with the program. So putting Help in the far right position is better.

A  reasonable  sequence  for  the  other  menus  would  be  to  order  them  according  to  their  scope:  the  
most  global  items  on  the  left,  getting  more  and  more  specific  as  we  move  to  the  right.  

The  Program  Menu  

The properties of a program include its default settings, what templates are available and what modes it is in. They would include the configuration of standard interface idioms like toolbars, and also personalization of graphics and colors.

The Document Menu

In this menu the prime menu items cover the properties of the currently active document, including things like size, type, margins and page orientation. Different document views, like print or presentation, are also kept here, as are functions that operate on the document, like recalculating a spreadsheet or formatting text. Access to the outside world at the document level is currently served by the top five items on most File menus; the outside world includes the printer, fax, email and so on.

This is also a logical place to maintain the most-recently-used list of documents, which is normally seen at the bottom of the File menu.

With all these items the document menu can be big, so it can be broken into two or more popups.


Pieces  of  the  document  

The next menu to the right would cover the objects embedded in the document. If there are tables or images in the document, this is the place to keep the menu items that control them. These menu items are enabled only when a particular object is selected, and the objects don't necessarily have to be embedded by another program. In a drawing program, there can be things like rectangles, ellipses and polylines. A word processor would contain paragraphs of text and headings, and the controls on the menu can also include stylesheets and formatting controls. This menu also covers other properties of the object, such as size, color and orientation. Objects can also have transformations, like formatting and rotation; those should be included too.

The  last  on  this  menu  would  be  the  ability  to  load  and  save  objects  from  other  documents  or  from  
the  disk.  

Optional  Menus  

The  View  Menu  

The View menu should contain all options that influence the way the user looks at the program's data. Additionally, any optional visual items like rulers, templates or palettes should be controlled here.

The  Insert  Menu  

The Insert menu is an extension of the Edit menu. If the program needs insertion for only a few items, these can be included in the Edit menu; otherwise a separate Insert menu is created.

The  Format  Menu  

This is the weakest of the optional menus, because it deals almost exclusively with properties of visual items, which are better controlled by direct manipulation than by functions. The page setup, which is normally kept in the File menu, can be kept here.

The  Tools  Menu  

The Tools menu, also called Options or Functions, contains powerful functions. Functions like spell checkers and goal finders are included here. The items of the Tools menu are also called hard-hat items. Hard-hat items are functions that should only be used by real power users.


For example, a client/server database program has easy-to-use direct-manipulation idioms for building a query, while behind the scenes the program composes the appropriate SQL statements to create the report. Giving power users a way to edit the SQL statement directly is most definitely a hard-hat function!


System  Menu  

The system menu is the standard little menu available in the upper left-hand corner of all independent windows. It doesn't really do much. In Windows 3.x there was a little box with a horizontal bar to denote the system menu; since Windows 95 it has been replaced by the program's icon. Windows puts this menu on top-level and MDI windows, so the application designer doesn't really have much choice about it.

Menu  Item  Variation  

Disabling Menu Items: A defined Windows standard is to disable, or gray out, menu items when they are not relevant to the selected data items. Menus have robust facilities that make it easy to gray them out when not required. The user is well served by knowing what they can do and what they can't.

Disable  menu  items  when  they  are  irrelevant.  
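The rule above can be sketched as code. This is a hypothetical illustration (the `MenuItem` class and its names are invented for this note, not any real toolkit's API): each item carries a relevance predicate over the application state, and the menu grays it out when the predicate fails.

```python
# Hypothetical sketch: a menu item enables itself only when relevant,
# e.g. "Paste" is enabled only when the clipboard holds something.
class MenuItem:
    def __init__(self, label, is_relevant):
        self.label = label
        self.is_relevant = is_relevant  # predicate over application state

    def enabled(self, state):
        # Disabled ("grayed out") whenever the predicate says irrelevant.
        return self.is_relevant(state)

paste = MenuItem("Paste", lambda s: bool(s.get("clipboard")))

assert not paste.enabled({"clipboard": ""})      # grayed out: nothing to paste
assert paste.enabled({"clipboard": "hello"})     # active: clipboard has data
```

Real GUI toolkits do the same thing with an enabled/disabled flag that the program updates as the selection changes.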

Cascading Menus: This is a variant of menus where a secondary menu can be made to pop up alongside a top-level popup menu. This technique is called cascading menus.

Popup menus provide nice, monocline grouping; cascading menus move the user into the nasty territory of nesting and hierarchies. Still, the desire to make menus hierarchical is nearly unavoidable.

Flip-flop Menu: This is used when the menu choice offered is a binary one, i.e. one that will be in either of two states. If we have to create two items, "Display tools" and "Hide tools", they can share a single menu slot that shows one at a time. This type of menu item alternates between two values, always showing the one that is currently not chosen.

The flip-flop saves space because two menu items can be folded into one slot.
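A flip-flop item is easy to model. This is a hypothetical sketch (the `FlipFlopItem` class is invented for this note): the slot always displays the action the user can switch *to*, and activating it toggles the state.

```python
# Hypothetical sketch of a flip-flop menu item: one menu slot that always
# shows the action NOT currently in effect ("Display Tools" / "Hide Tools").
class FlipFlopItem:
    def __init__(self, show_label, hide_label, visible=False):
        self.show_label, self.hide_label = show_label, hide_label
        self.visible = visible

    @property
    def label(self):
        # Show the label for the state the user can switch TO.
        return self.hide_label if self.visible else self.show_label

    def activate(self):
        self.visible = not self.visible

tools = FlipFlopItem("Display Tools", "Hide Tools")
print(tools.label)   # Display Tools  (tools are hidden, offer to show them)
tools.activate()
print(tools.label)   # Hide Tools     (tools are visible, offer to hide them)
```

The sketch also hints at the idiom's weakness: the label never names the *current* state, only the opposite one, which is exactly why some users find flip-flop menus confusing.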

Graphics  on  menus  

Visual symbols next to the text items help the user to differentiate between them without having to read, so the items are understood faster. This can speed the user up. They also provide a helpful visual connection to other gizmos that do the same task: the menu item should show the same graphic as the corresponding toolbar buttcon.


Bang  Menu  Item  

Some top-level menu items on the horizontal bar behave like an immediate menu item on a popup: rather than displaying a popup menu for a subsequent selection, the immediate item causes the function to be executed immediately. For example, an immediate menu item to compile some source code would be called "Compile!". The exclamation mark is a "bang", and by convention top-level immediate menu items were always followed by a bang.

Its behavior is so unexpected that it usually generates instant anger. The bang menu item has virtually no instructional value; it is dislocating and disconcerting. Buttcons on a toolbar behave like bang menu items: they are immediate and top level. The difference is that buttcons on a toolbar advertise their immediacy because they are buttcons.

Accelerators  

Accelerators provide an additional, optional way to invoke a function from the keyboard. Accelerators are keystrokes, usually function keys or keys activated with a 'CTRL', 'ALT' or 'SHIFT' prefix, and are shown on the right side of the popup menu items. They are a defined Windows standard, but implementation is up to the individual designer.

Tips  for  good  accelerators  

− Follow  Standards  

− When a standard accelerator exists, the developer needs to use it. Users quickly learn how much easier it is to type CTRL+C and CTRL+V for copy and paste, so a new program should use the same keys. If we used CTRL+S for copy and CTRL+P for paste, users would be in trouble: use CTRL+S and CTRL+P for Save and Print respectively.

− Provide  for  their  daily  use  

− Identifying the set of commands that are frequent in daily use is very tricky. The developers have to analyze this properly and provide accelerators for those tasks. This can create problems, because what the developers consider frequent may differ from what individual users actually do.

− Show  how  to  access  them  

− The implemented accelerators need to be shown properly in the menu items, because an accelerator won't do any good if the user has to go to the manual to find out which accelerators are available. So we need to put them in the menu, on the right side; users will be happy to see them.
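The three tips can be sketched together. This is a hypothetical example (the table and the `menu_label` helper are invented for this note): a map of the standard Windows accelerators named above, plus the convention of showing the keystroke on the right side of the menu caption (Windows menus separate caption and accelerator with a tab).

```python
# Hypothetical sketch: standard accelerators and their menu display.
STANDARD_ACCELERATORS = {
    "Ctrl+C": "copy",
    "Ctrl+V": "paste",
    "Ctrl+X": "cut",
    "Ctrl+S": "save",
    "Ctrl+P": "print",
}

def menu_label(caption, accelerator):
    # Show the accelerator on the right side of the menu item,
    # separated by a tab, as Windows menus conventionally do.
    return f"{caption}\t{accelerator}"

assert STANDARD_ACCELERATORS["Ctrl+S"] == "save"    # not "copy"!
print(menu_label("Save", "Ctrl+S"))                 # 'Save\tCtrl+S'
```

Keeping such a table in one place also makes it easy to audit a program for conflicts with the established standards.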

Mnemonics  

Mnemonics are the Windows standard for adding keystroke commands in parallel to the direct manipulation of menus and dialogs.

Mnemonics are the underlined letters in a menu item. Entering that letter shifted with the ALT key activates the item immediately, which executes the menu command. The main purpose of mnemonics is to provide a keyboard equivalent of each menu command. That is why mnemonics should be complete for text-oriented applications.

Mnemonics  are  not  optional.    
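Assigning mnemonics can be sketched as a small algorithm. This is a hypothetical illustration (the `assign_mnemonics` function is invented for this note): each menu caption gets the first letter not already used as a mnemonic, marked with '&' before the letter, which is how Windows resource files denote the underlined mnemonic character.

```python
# Hypothetical sketch: assign a unique mnemonic (underlined letter) to each
# menu caption, using '&' before the letter as Windows does.
def assign_mnemonics(items):
    used, result = set(), {}
    for item in items:
        for i, ch in enumerate(item):
            # Pick the first letter of the caption not yet taken.
            if ch.isalpha() and ch.lower() not in used:
                used.add(ch.lower())
                result[item] = item[:i] + "&" + item[i:]
                break
    return result

menus = assign_mnemonics(["File", "Edit", "Format", "Help"])
print(menus["File"])    # &File   -> Alt+F
print(menus["Format"])  # F&ormat -> Alt+O, since F is already taken by File
```

Real designers usually choose mnemonics by hand for memorability, but the uniqueness constraint shown here is the same.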

Dialog  Box  

Dialog boxes are not part of the main program. If the program is a kitchen, the dialog box is its pantry.

A  dialog  box  is  a  secondary  window  that  allows  users  to  perform  a  command,  asks  users  a  question,  
or  provides  users  with  information  or  progress  feedback.  

A dialog box is similar to a popup window that contains controls. A dialog box is used to display information or to request information from the user. The main difference between dialog boxes and popup windows is that dialog boxes use templates that define the controls created on the dialog box. These templates can be created dynamically in memory while the parent application executes.

Dialog  boxes  consist  of  a  title  bar  (to  identify  the  command,  feature,  or  program  where  a  dialog  box  
came   from),   an   optional   main   instruction   (to   explain   the   user's   objective   with   the   dialog   box),  
various  controls  in  the  content  area  (to  present  options),  and  commit  buttons  (to  indicate  how  the  
user  wants  to  commit  to  the  task).  

Dialog boxes are superimposed over the main window of the owner program. The dialog box engages the user in a conversation by offering information and requesting some input. When the user has finished viewing or making changes in the dialog box, he can accept or reject them. After the completion of the task, the dialog box disappears.

Some applications use dialogs as the primary interface of the GUI, but doing this forces the user to switch between the main window and the dialog boxes frequently.


Dialog boxes are appropriate for any functions that are out of the main flow of the program. Anything that is confusing or rarely used is well implemented in a dialog box. Dialog boxes are also well suited for concentrating information related to a single subject, such as the properties of an object in an application, e.g. an invoice.

Most dialog boxes are invoked from menu items, so there is a natural kinship between menus and dialogs. As the menus provide the pedagogic command vector, dialog boxes are part of it too.

There can be two different types of dialog box users: the frequent user, who is familiar with the program and uses dialogs to control its more advanced or dangerous facilities; and the infrequent user, who is unfamiliar with the scope and use of the program and is still learning it. This dual nature means that dialog boxes must be speedy, powerful, compact and smooth, and they also need to be self-explanatory.

Most dialog boxes have buttons, combo boxes and other gizmos on their surface, arranged to fit the size of the dialog. A dialog box may or may not have a caption bar or a thick frame.

A dialog box is always a child window, so it must have an owner; normally the owner is an application program, but it can be the Windows system itself. Dialog boxes are usually placed on top of the owner program, although the windows of other programs may obscure them.

Every dialog box has at least one terminating command, which causes the dialog box to shut down. Generally most dialog boxes offer two buttons, OK and CANCEL, although the close box in the corner of the caption bar can also be used.

Dialog  Box  Types  

Modal  Dialog  Boxes  

This is the most common type of dialog box. A modal dialog box disables its owner window while the dialog box is displayed, so when a modal dialog is being displayed the user can't switch to another part of the same application. Modal dialog boxes require users to complete them and close them before continuing with the owner window. They are best used for critical or infrequent, one-off tasks that require completion before continuing.

Once the box comes up, the owning program cannot continue until the dialog box is closed; it stops the proceedings. Clicking on any other part of the window belonging to the program will give a rude beep to the user. Everything is deactivated for the duration. The user can activate other programs, but the dialog box of the previous program will remain waiting.

The principle of the modal dialog box is: "Stop what you are doing and deal with me now. When you are done, you can return to what you were doing."

If the dialog box is function oriented, it usually operates on the entire program or on the entire active document. If the modal box is process or property oriented, it usually operates on the current selection. Modal dialog boxes stop their owning application, so they are also called application modal.

It is also possible to create a dialog box called system modal, which brings all programs in the system to a halt. No application program should ever create this type of dialog; its true purpose is to report catastrophic occurrences that affect the entire system, such as a hard disk error.

Modeless  Dialog  Box  

A modeless dialog box does not disable its owner window when it is created. Displaying a modeless dialog box does not stop the parent application and does not force the user to respond to the dialog box. Modeless dialog boxes allow users to switch between the dialog box and the owner window as desired, and are best used for frequent, repetitive, ongoing tasks. The user can interact with both the owner and child windows.

Once the modeless dialog box comes up, the owning program continues without interruption. It does not stop the proceedings, and the application does not freeze. The various facilities and controls, menus and toolbars of the main program remain active and functional. Modeless dialog boxes have terminating commands too, although the conventions for them are weaker and more confusing than for modal dialogs.
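The behavioral contrast between the two types can be sketched as a small state model. This is a hypothetical illustration (the `Window` and `Dialog` classes are invented for this note, not any toolkit's API): a modal dialog disables its owner while open; a modeless one leaves it enabled.

```python
# Hypothetical sketch contrasting modal and modeless dialog behavior.
class Window:
    def __init__(self):
        self.enabled = True

class Dialog:
    def __init__(self, owner, modal):
        self.owner, self.modal, self.open = owner, modal, False

    def show(self):
        self.open = True
        if self.modal:
            self.owner.enabled = False  # owner is frozen until we close

    def close(self):
        self.open = False
        self.owner.enabled = True

main = Window()
find = Dialog(main, modal=False)   # modeless: owner stays usable
find.show()
assert main.enabled

font = Dialog(main, modal=True)    # modal: owner is disabled while open
font.show()
assert not main.enabled
font.close()
assert main.enabled
```

The asymmetry shown here is the whole difference: visually the two dialogs may look identical, which is exactly the source of the confusion discussed below.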
 
The modeless dialog box is a much more difficult beast to use and understand, mostly because the scope of its operation is unclear. It appears in a program after it is called, and while it is being displayed the user can also work on the owner window.
 
The  modeless  dialog  problem  
The behavior of most modeless dialog boxes is inconsistent and confusing. They are visually very close to modal dialog boxes but are functionally very different. Most of the confusion arises because most users are more familiar with the modal form, and because of inconsistencies in the way users work with them. When a dialog box is shown, the user assumes it is a modal dialog box with modal behavior, but it may be modeless. If it is modeless, users must tentatively poke at it to find out how it behaves. This confusion stems from the user's familiarity with modal dialogs.
 
For example, in Word the user can request the modeless Find dialog box from the Edit menu, and can also request the Font dialog from the Format menu, which is modal. While both are open, the modal dialog box sits on top of the modeless one. The modeless Find dialog box is function oriented, whereas the modal Font dialog box is property oriented. Functionally there is nothing wrong, but visually it is a nonsensical placement of unrelated dialogs.
 
Two  Solutions  
The  Evolutionary  Solution  
In this solution the modeless dialog box is left pretty much as it is, but two principles are applied:
1. We must visually differentiate modeless dialogs from modal ones.
2. We must adopt consistent and correct conventions for the terminating commands.
When the Windows API is used to create a modeless dialog box, it looks no different from a modal dialog. To make the difference visible, the designer must make a noticeable change in the dialog box. A good method could be a distinctive background color, or adding a pattern or image; a colored border can also be used. Buttons can be changed to appear like buttcons, and their color and font can also be changed.
 
Beyond appearance, we can differentiate the modeless dialog box visually in other ways: we can change its orientation, change the look of its caption bar, or add symbols or animated graphics to distinguish it from a modal dialog.
 
Another area where convention is needed is the terminating commands. Currently each vendor or developer uses their own way: some say CLOSE, some say APPLY, some use DONE, while others use DISMISS, ACCEPT, YES or even OK. This endless variety is problematic. Terminating a modeless dialog box should be simple, consistent, easy, and very similar (maybe not identical) from program to program.
 


Developers change the legend from CANCEL to APPLY, or from CANCEL to CLOSE, depending on whether the user has taken action with the modeless dialog box. But the legend shouldn't be changed. If the user hasn't selected a valid option but presses OK anyway, the dialog box should assume that the user means 'dismiss the box without taking any action'. A modal dialog box gives the option to cancel the job with a CANCEL button, but modeless boxes normally don't offer that.
 
In a modal dialog box, OK means 'accept the user input and close the dialog' and CANCEL means 'abandon the input and close the dialog', but there are no such defined concepts for modeless dialogs.
 
The only consistent terminating action for modeless dialog boxes is CLOSE (or GO AWAY). Every modeless dialog should have a CLOSE button placed in a consistent location, like the lower right corner, and it should be implemented by all programs without changing the caption, i.e. CLOSE.
 

Dialog boxes may have many other buttons that invoke functions. The dialog box should not close when one of these function buttons is activated; it stays around for repetitive use and should only be closed when the CLOSE button is pressed. Modeless dialog boxes should also be conservative of pixels: they stay on the screen occupying front-and-center locations, so they mustn't waste any extra pixels on the screen.
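The convention just described can be sketched as code. This is a hypothetical example (the `ModelessFindDialog` class and its button names are invented for this note): function buttons act repeatedly without dismissing the dialog; only CLOSE terminates it.

```python
# Hypothetical sketch: in a modeless dialog, function buttons repeat their
# action and the dialog stays up; CLOSE is the only terminating command.
class ModelessFindDialog:
    def __init__(self):
        self.open = True
        self.finds = 0

    def press(self, button):
        if button == "Find Next":
            self.finds += 1          # repeatable action, dialog stays open
        elif button == "Close":
            self.open = False        # the single, consistent terminator

dlg = ModelessFindDialog()
dlg.press("Find Next")
dlg.press("Find Next")
assert dlg.open and dlg.finds == 2   # still open after repeated use
dlg.press("Close")
assert not dlg.open
```

Note that nothing in the sketch renames the Close button depending on prior actions, in keeping with the rule that the terminating legend should never change.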
 
Another, Better Option (Cooper)
Cooper suggests replacing the modeless dialog box with a toolbar offering the same facilities, permanently attached to the top of the program's main window. The modelessness of toolbar buttcons is perfectly acceptable because they are not delivered in the familiar visual form of a dialog; they are visually present but clearly different. The buttcons are happy tools on the toolbar. For example, in MS Word we can select something and press the italic I buttcon, then select another piece of text and do the same. There is no trouble doing these tasks again and again.
 
Toolbars are just as modeless as dialog boxes, but they don't introduce the confusion that the dialogs do. They offer two characteristics that modeless dialog boxes don't: they are visually different from dialog boxes, and they have a consistent idiom for coming and going.
 
If a modeless dialog box looks very different from a modal dialog box, that solves half of the problem; moving its items onto a toolbar accomplishes almost all of it.
 


The modeless dialog is free floating, and the user can place it anywhere on the screen. Nowadays the same feature is also implemented in toolbars: this is called a floating toolbar, or floater. A floater is not docked on the program window. A floater looks similar to a docked toolbar, but it has a thick frame for resizing and a mini-caption bar. A mini-caption bar is a caption bar about half the height of a regular caption bar. E.g. Visual Basic's tool palette.
 
In all of Microsoft's programs there is a facility to click and drag the toolbar and pull it away from the edge of the program. Microsoft has made the floating/docking toolbar idiom a standard in new releases of the Office suite as well.
 
Different Types of Dialogs in Windows
ñ Property  Dialog  Box  
  A property dialog box presents the user with the settings or characteristics of a selected object and enables the user to make changes to those characteristics. The characteristics may relate to the entire application or document rather than just one object.
  E.g. the Font dialog box in MS Word, from which the user can change the values of font, font style, color, size, etc.
 
ñ Function  Dialog  Box  
Function dialog boxes are usually summoned from the menu. They are most frequently modal dialog boxes, and they control a single function like printing, inserting or spell checking. Function dialog boxes not only allow the user to launch an action, they often enable the user to configure the details of the action's behavior. For example, when the user requests printing, the print dialog box is used to specify which pages to print, the number of copies, the printer to print to, etc. The terminating OK button on the dialog not only closes the box but also initiates the print operation. This technique combines two functions: configuring the function and invoking it. Just because the user configures the printing procedure doesn't mean he wants to print, so it is better to separate the two.
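The separation argued for above can be sketched as code. This is a hypothetical illustration (the `PrintSettings` class and both functions are invented for this note): configuration mutates state without side effects, and invocation is a separate, explicit step.

```python
# Hypothetical sketch: keep "configure the function" separate from
# "invoke the function" instead of fusing both into one OK button.
class PrintSettings:
    def __init__(self):
        self.copies = 1
        self.pages = "all"

def configure(settings, **changes):
    # Configuration alone only records preferences; nothing is printed.
    for key, value in changes.items():
        setattr(settings, key, value)

def do_print(settings):
    # Invocation is a distinct action the user must explicitly request.
    return f"printing {settings.pages} pages, {settings.copies} copies"

s = PrintSettings()
configure(s, copies=2, pages="1-3")   # the user may stop here...
print(do_print(s))                    # ...or explicitly invoke the action
```

With this split, closing the dialog after configuring does not force the action, which is exactly the complaint about the combined OK button.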
 

ñ Bulletin  Dialog  Box  


The bulletin dialog box is a devilishly simple little artifact that is arguably the most abused part of the graphical user interface.
The bulletin is best characterized by the ubiquitous error message box. There are well-defined conventions for how these dialogs should look and work, primarily because the MessageBox call has long been part of the Windows API. Normally, the issuing program's name is shown in the caption bar, and a very short text message is displayed in the body. A small graphic that indicates the class of the problem, along with buttons, is also included. Both property and function dialog boxes are requested by the user; they serve the user. Bulletins, on the other hand, are always issued by the program; they serve the program. Both error and confirmation messages are bulletins.
 
ñ Process  Dialog  Box  
Process dialog boxes are like bulletins in that they are activated at the program's discretion rather than at the user's request. They indicate to the user that the program is busy with some internal function and has become foolish. The process dialog box alerts the user to the program's inability to respond normally, and warns the user not to be overcome with impatience or to bang on the keyboard to get the program's attention.
 
Software’s   that   makes   significant   use   of   slower   hardware   like   networks,   disks,   or   tapes   will   be  
foolish  more  frequently.  
 
The process dialog should perform four tasks:
• Make clear to the user that a time-consuming process is happening.
◦ The mere presence of a process dialog satisfies this requirement, alerting the user to the fact that some process is occurring.
 
• Make clear to the user that things are completely normal.
◦ This requirement is tough to fulfill. The program can crash and leave the dialog box up, lying mutely to the user about the status of the operation. The process dialog box must therefore continually show, via time-related movement, that things are progressing normally. A static dialog box that announces the computer is reading from disk may tell the user that a time-consuming process is happening, but it can't show whether that is still true. The best way to show that the process is normal is with animation, like the copy, cut, or delete dialog boxes shown in Windows. Those effects are remarkable, and they make sense.
 
• Make clear to the user how much more time the process will consume.
◦ Some kind of progress bar satisfies this requirement. An animated countdown is another good option.
 
• Provide a way for the user to cancel the operation.
◦ The copy, cut, and delete dialog boxes have a CANCEL button that lets the user abort the process. This fulfills the requirement to cancel the operation.
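The four tasks above can be restated in code. The sketch below is toolkit-agnostic Python; the names (ProcessDialog, copy_files) are invented for illustration, and a real implementation would drive an actual progress-bar gizmo instead of a list.

```python
import threading

class ProcessDialog:
    """Sketch of a process dialog's logic: progress reporting plus a
    CANCEL control, independent of any particular GUI toolkit."""

    def __init__(self, total_steps):
        self.total_steps = total_steps
        self.cancelled = threading.Event()   # set when the user presses CANCEL
        self.progress_log = []               # stands in for an animated progress bar

    def on_cancel(self):
        self.cancelled.set()

    def report(self, step):
        # Time-related movement: updating on every step shows the user
        # that things are progressing normally, not frozen.
        self.progress_log.append(round(step / self.total_steps * 100))

def copy_files(dialog, files):
    """A time-consuming process that checks for cancellation between steps."""
    for i, _ in enumerate(files, start=1):
        if dialog.cancelled.is_set():
            return "cancelled"
        dialog.report(i)
    return "finished"

dialog = ProcessDialog(total_steps=4)
result = copy_files(dialog, ["a", "b", "c", "d"])
# progress_log now reads [25, 50, 75, 100]: visible, steady movement.
```

The key design point is that the worker checks the cancel flag between steps, so the CANCEL button stays honest, and progress is reported on every step, giving the dialog the time-related movement it needs.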
 
Dialog  Box  Conventions  
The  Caption  Bar  
If a dialog box doesn't have a caption bar, it cannot be moved. All dialog boxes should be movable so they don't obscure the contents of the windows they overlap. Therefore, all dialog boxes should have caption bars.
 
There seems to be some belief that system-modal messages don't need caption bars, because they are often used to report fatal errors. A programmer may reason, 'The system has crashed, so why let the user move the dialog?' But it is better for the user interface to show a well-behaved dialog box even in that case; otherwise the user may be irritated by a rude one.
 
There is also some confusion about what to show in the caption bar. Some recommend showing the name of the function, while others think it better to put the name of the program there. So which should it be? Neither? Both? Cooper recommends neither of these two by itself.
 
If the dialog box is a function dialog, the caption bar should contain the name of the function - the verb. For example, if the user requests 'Break' from the 'Insert' menu, the caption bar should say 'Insert Break'. It denotes that the user is inserting a break. If the caption were just 'Break', it could be read to mean that the user is breaking something, which is not true.
 
The caption bar should also indicate what is selected, to the best of its ability. For example, if you select the sentence 'KCC ROCKS!' and invoke 'Font' from the Format menu, the dialog's caption should say 'Format font for KCC ROCKS!' If the selected text is long, the caption should show some part of it. If nothing is selected, the caption should say 'Format font for future text.'
 
If the dialog box is a property dialog box, the caption should contain the name or description of the object whose properties are being set. When the user requests the properties dialog box for a directory named 'SAROZ', the caption should say 'Properties of SAROZ'.
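The caption rules above can be condensed into a small sketch. This is illustrative Python only; the helper names and the 20-character truncation limit are assumptions, not part of any real API.

```python
def function_caption(menu, command):
    """Function dialog: menu verb + command, e.g. 'Insert Break'."""
    return f"{menu} {command}"

def selection_caption(action, selection, max_len=20):
    """Append what is selected; fall back to 'future text' when nothing is,
    and shorten long selections to just some part of them."""
    if not selection:
        selection = "future text"
    elif len(selection) > max_len:
        selection = selection[:max_len] + "..."
    return f"{action} for {selection}"

def property_caption(object_name):
    """Property dialog: name the object whose properties are being set."""
    return f"Properties of {object_name}"

function_caption("Insert", "Break")             # 'Insert Break', not just 'Break'
selection_caption("Format font", "KCC ROCKS!")  # 'Format font for KCC ROCKS!'
selection_caption("Format font", "")            # 'Format font for future text'
property_caption("SAROZ")                       # 'Properties of SAROZ'
```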
 
Transient  Posture  
If dialog boxes were independent programs, they would be transient-posture programs. The user expects dialog boxes to look and behave like transient programs, with bold visual idioms, bright colors, and large buttons. Transient programs borrow their pixels from sovereign applications, so they must never waste those pixels. Individual gizmos in a dialog box can be made slightly larger, but the designer must make sure the dialog doesn't waste additional space.
Dialogs should be as small as possible, but no smaller.
 
Borland International popularized a standard by creating extra-large buttcons with bitmapped symbols on their faces: a large red 'X' for CANCEL, a large green check mark for OK, and a big blue question mark for HELP. They were cleverly designed and attractive.
 
Reduce Excise
Dialog boxes can be a burden on the user if they require a lot of excise - unnecessary overhead. The user will soon tire of having to reposition or reconfigure a dialog box every time it appears.
 
The duty of the dialog box designer is to ensure that excise is kept to a bare minimum, particularly because dialog boxes are only supporting actors in the interactive drama.
 
The most common areas where dialog boxes fail to reduce excise are their geographical placement and their state. Dialogs should always remember where they were placed the last time and return to that place automatically. Most dialogs instead start fresh each time they are invoked, remembering nothing from their last run. This is an artifact of the way they are implemented: as subroutines with dynamic storage. We should not let such implementation details so deeply affect the way programs behave. Dialogs should always remember what state they were in the last time they were invoked and return to the same state.
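The remember-your-state rule amounts to keeping dialog placement and settings in storage that outlives any single invocation, instead of the subroutine's dynamic storage. A minimal sketch in Python; the class, its defaults, and the dialog names are invented for illustration.

```python
class DialogMemory:
    """Remember each dialog's last position and state so it can reopen
    exactly where, and how, the user left it."""

    def __init__(self):
        self._memory = {}   # survives between invocations, unlike a
                            # subroutine's dynamic storage

    def open(self, dialog_name):
        # Return the remembered placement/state, or fresh defaults.
        if dialog_name in self._memory:
            return self._memory[dialog_name]
        return {"position": (100, 100), "state": {}}

    def close(self, dialog_name, position, state):
        # Record where and how the dialog was when dismissed.
        self._memory[dialog_name] = {"position": position, "state": state}

mem = DialogMemory()
first = mem.open("Find")                                  # fresh: defaults
mem.close("Find", position=(400, 220), state={"match_case": True})
second = mem.open("Find")                                 # reopens as it was left
```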
 
Know  if  you  are  needed  
The  most  effective  way  that  a  dialog  box  can  reduce  excise  is  to  not  even  bother  appearing  if  it  is  not  
needed.   If   there   is   some   way   for   the   dialog   box   to   be   smart   enough   to   know   whether   it   is   really  
necessary,  the  program  should  –  by  all  means  –  determine  this  and  prevent  the  user  from  having  to  
merely  dismiss  the  unneeded  box;  an  action  that  is  pure  excise.    
For example, in Microsoft Word, users often save the document just before they print it, and they often print it just before closing it. That is, the user frequently wants to SAVE, PRINT, and CLOSE the
document in sequence. Unfortunately, the repagination involved in printing inadvertently marks the document as changed. This means the program asks the user to save when the CLOSE command is executed, even though he just did. The program should pay attention: of course the user wants the document saved before closing. Not only should the program not ask this question at all, it should be able to see from the user's actions that the user didn't change the document - the program did. The entire invocation of this dialog is excise.
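One way to implement 'know if you are needed' is to track who dirtied the document. The sketch below is hypothetical Python, not Word's actual logic: a program-driven change such as repagination sets a separate flag that never triggers the save prompt.

```python
class Document:
    """Track whether changes came from the user or from the program
    itself (e.g. repagination during printing)."""

    def __init__(self):
        self.user_dirty = False     # the user actually edited something
        self.program_dirty = False  # the program changed its own bookkeeping

    def user_edit(self):
        self.user_dirty = True

    def save(self):
        self.user_dirty = False
        self.program_dirty = False

    def print_document(self):
        # Repagination marks the document changed -- but the *program*
        # did that, not the user.
        self.program_dirty = True

    def needs_save_prompt(self):
        # Only user-made changes justify interrupting with a dialog.
        return self.user_dirty

doc = Document()
doc.user_edit()
doc.save()
doc.print_document()
doc.needs_save_prompt()   # False: closing now should not raise the dialog
```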
 
Terminating Commands for the modal dialog box
Every modal dialog box has one or more terminating commands. Most modal dialog boxes have three: the OK button, the CANCEL button, and the close box in the caption bar. The OK button means, 'Accept any changes I have made, then close the dialog and go away.' The CANCEL button means, 'Reject any changes I have made, then close the dialog and go away.' This is a simple, obvious formula and a well-established standard.
 
The modal dialog box makes a contract with the user that it will offer services on approval - the OK button - and a bold and simple way to get out without hurting anything - the CANCEL button. These two buttons cannot be omitted without violating the contract, and doing so deflates any trust the user might have had in the program. Omitting them stretches the user's tolerance to the extreme, so the designer should not omit these two buttons.
 
Offer  OK  and  CANCEL  buttons  on  all  modal  dialog  boxes.  
 
The design tip 'Offer OK and CANCEL buttons on all modal dialog boxes' applies to the function and property types. Bulletin dialogs reporting errors can get away with just an OK button. Process dialogs need only a CANCEL button so the user can end a time-consuming process.
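The contract above maps each dialog type in this chapter's taxonomy to its required terminating buttons. The sketch below is a simple Python restatement; the type names are chosen here for illustration.

```python
def terminating_buttons(dialog_type):
    """Terminating buttons required by each kind of dialog box."""
    required = {
        "function": ["OK", "CANCEL"],  # services on approval, plus a safe exit
        "property": ["OK", "CANCEL"],
        "bulletin": ["OK"],            # error reports can get away with just OK
        "process":  ["CANCEL"],        # let the user end a slow operation
    }
    return required[dialog_type]

terminating_buttons("process")   # ['CANCEL']
```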
 
The OK and CANCEL buttons are the most important controls on any dialog box; they must be immediately identifiable visually, standing out from the other controls on the dialog box and particularly from other action buttons. Lining up several visually identical buttons, including OK and CANCEL, is not the right thing to do, regardless of how frequently it is done.
 
The CANCEL button, in particular, is crucial to the dialog box's ability to serve its pedagogic purpose. As a new user browses the program, he will want to examine the dialogs to learn their scope and purpose, then CANCEL them so as not to get into any trouble. For the experienced user, the OK
button begins to assume greater importance than the CANCEL button. The user calls up the dialog box, makes his changes, and exits with a confirming push of the OK button.
 
The  Close  Box  
Because dialog boxes are windows with caption bars, they have another terminating idiom: clicking the close box in the upper right corner terminates the dialog box. The problem with this idiom is that the disposition of the user's changes is unclear. Were the changes accepted or rejected? Was it equivalent to an OK or a CANCEL? Because of this confusion, there is only one reasonable way for programs to interpret the idiom - as CANCEL - but this conflicts with its meaning on a modeless dialog box, where it is the same as the CLOSE command. The close box is needed on modeless dialog boxes, not on modal ones.
Don't put close boxes on modal dialogs.
 
Keyboard  Shortcuts  
Many dialogs offer services that are frequently used, like those for FIND & REPLACE. Application users love keyboard shortcuts because they make such tasks faster. There are plenty of keys available for shortcuts, but it is better to reuse the widely established ones in a new application, because users are already familiar with them. For example, CTRL+F is used by almost every application for FIND.
 
Tabbed  Dialogs  
This is a relatively recent user interface idiom. The tabbed dialog is sometimes called a 'multi-pane dialog.' Within a few years of its introduction, this dialog became well established. Tabbed dialogs allow all or part of a dialog to be set aside in a series of fully overlapping panes, each one with a protruding, identifying tab. Clicking a tab brings its associated pane to the foreground, hiding the others. The tabs can run horizontally across the top or the bottom of the pane, or vertically down either side.

The tabbed dialog has had good success because it follows the user's mental model of how things are normally stored: in a monocline grouping. The various gizmos are grouped into several panes.
 
A tabbed dialog allows cramming more gizmos onto a single dialog box, but that doesn't mean users will find it better. The contents of the various panes on the dialog must have a meaningful rationale for being together; otherwise the arrangement is good for the programmer rather than for the user.
 
Every tabbed dialog box is divided into two parts: the stack of panes, called the tabbed area, and the remainder of the dialog outside the panes, called the un-tabbed area. The terminating buttons must be placed in the un-tabbed area. If the terminating buttons are placed in the tabbed area, even if they don't change from pane to pane, their meaning is ambiguous. The user may wonder, 'If I press the CANCEL button, am I canceling just the changes on the current pane, or all of the changes made on all panes?'
'Put terminating buttons in the un-tabbed area.'
'Don't stack tabs.'
 
Expanding  Dialogs  
Expanding dialogs 'unfold' to expose more controls. The dialog shows a button marked 'More' or 'Expand', and when the user presses it, the dialog box grows to occupy more screen space. The newly exposed part of the dialog box holds additional functionality. This type of dialog is less used nowadays, with the development of toolbars and tabbed dialogs.
 
Usually, expanding dialogs allow infrequent or first-time users the luxury of not having to confront the complex functionality that more frequent users don't find upsetting. A programmer may think of the dialog as being in either beginner or advanced mode. But when a program has one dialog for beginners and one for experts, it both insults the beginners and hassles the experts.
 
As implemented, most expanding dialogs always come up in beginner mode. This forces the advanced user to promote the dialog every time. Why can't the dialog come up in the appropriate mode instead? It is easy to know which mode is appropriate: it's usually the mode the dialog was left in. If a user expands the dialog, then closes it, it should come up expanded next time. If it was put away in its shrunken state last time, it should come up in the shrunken state. This mechanism automatically chooses the mode for the user, rather than forcing the user to select the mode of the dialog box.
 
For all of this to happen, the dialog box needs a 'Shrink' button as well as an 'Expand' button. The most common way this is done is to have only one button whose legend changes between 'Expand' and 'Shrink' as it is pressed. Normally, changing the legend on a button is weak, because it gives no clue as to the current state, only indicating the opposite state. In the case of expanding dialogs, though, the visual nature of the expanding dialog itself makes clear enough which state the dialog is in.
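The expand/shrink behavior described above - one toggle button, and the dialog reopening in whatever mode it was last left in - can be sketched as follows. The class and its storage are invented for illustration.

```python
class ExpandingDialog:
    """An expanding dialog that reopens in whatever mode it was left in,
    with one toggle button whose legend names the *other* state."""

    last_mode = {}   # remembered across invocations, keyed by dialog name

    def __init__(self, name):
        self.name = name
        # First run defaults to beginner (shrunken) mode.
        self.expanded = ExpandingDialog.last_mode.get(name, False)

    @property
    def button_legend(self):
        # The legend indicates the opposite state; the dialog's visible
        # size makes the current state obvious.
        return "Shrink" if self.expanded else "Expand"

    def toggle(self):
        self.expanded = not self.expanded

    def close(self):
        ExpandingDialog.last_mode[self.name] = self.expanded

d = ExpandingDialog("Find")    # first run: comes up shrunken
d.toggle()                     # the user expands it...
d.close()                      # ...and puts it away expanded
d2 = ExpandingDialog("Find")   # next time it comes up expanded
```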
Cascading  Dialogs  
Cascading dialogs are a diabolically simple technique whereby gizmos, usually push buttons, on one dialog box summon up another dialog box in hierarchical nesting. The second dialog box usually covers up the first one. Sometimes the second dialog can summon up yet a third one. This can create a great mess!
 
It is simply hard to understand what is going on with cascading dialogs. Part of the problem is that the second dialog covers up the first. That isn't the big issue - after all, combo boxes and popup menus do that. The real confusion comes from the presence of a second set of terminating buttons.
 
The strength of tabbed dialogs is handling breadth of complexity, while cascading dialogs are better suited for depth. The problem is that excessive depth is a prime symptom of a too-complex interface.
 
Example: most print dialogs allow a print setup dialog to be called, and most print setup dialogs allow a print driver configuration dialog to be called. Each layer of dialog box is another layer deeper into the process, and as the user terminates the uppermost dialog, the system returns control to the next lower dialog, and so on.
 
Cascading dialogs exist because they seem natural to programmers and because they mirror the physical processes underneath them. But this is about as backward a motivation as one can have - it ignores the user's goal and the user's mental model of the process.
 
Directed  Dialogs  
Most dialogs are pretty static, presenting a fixed array of gizmos. A variant called the directed dialog changes and adapts its suite of gizmos based on user input.
 
For example, the Customize dialog box in Microsoft Word is a typical directed dialog. Depending on what the user selects in the Categories listbox gizmo, the groupbox to its right will be either a collection of buttons or a listbox filled with macros, font names, or other items. The gizmos on the dialog box configure themselves in real time in response to the user's actions.
 
Programming a directed dialog can get complicated, so it is not done with great frequency.
 
TOOLBARS  
A toolbar is a graphical presentation of commands optimized for efficient access. Toolbars are the new kid on the idiom block, and they are popular in Windows nowadays. The toolbar has great strengths and weaknesses, but they are complementary to those of its partner, the menu: where menus are complete and helpful to the new user, toolbars carry only frequently used commands and offer the newcomer little help.
 
The typical toolbar is a collection of buttcons, usually with images instead of text captions, in a horizontal bar positioned adjacent to and below the menu bar. Essentially, the toolbar is a single, horizontal row of immediate, always-visible menu items.
 
The toolbar really gave birth to the buttcon: a happy marriage between a button and an icon. As visual mnemonics of functions, buttcons are excellent. They can be hard for newcomers to interpret, but then, they're not for newcomers.
 
Great ideas in user interface design often seem to spring from many sources simultaneously. The toolbar is no exception. It appeared in many programs at about the same time, and nobody can say who invented it first. The invention of the toolbar solved the problems of the pulldown menu: toolbar functions are always plainly visible, and the user can trigger them with a single mouse click. The user doesn't have to pull down a menu to get to a frequently used function.
 
Toolbars  are  not  menus  
Toolbars are often thought of as just a speedy version of the menu. The similarities are hard to avoid: they offer access to the program's functions and they form a horizontal row across the top of the screen. Designers imagine that toolbars are a command vector parallel and identical to the menus - that the functions available on toolbars are supposed to be the same as those available on menus.
 
Toolbars provide experienced users with fast access to frequently used functions.
 
The great strength of menus is their completeness. Everything the user needs can be found somewhere on the program's menus. Of course, this very richness means that they get big and cumbersome, which keeps these big menus out of the ranks of visible and immediate commands. The tradeoff with menus is thoroughness and power in exchange for a small, uniform dose of clunkiness applied at every step.
The buttcons on toolbars, on the other hand, are incomplete, but they are undeniably visible and immediate, and very space-efficient compared to menus. A simple, single click of the mouse on a toolbar buttcon generates instant action. The user doesn't have to search for the function a layer deep in menus - it's right there in plain sight, and one click is all it takes, unlike the mouse dragging required by menus.
 
Why  not  Text?  
If the buttcons on a toolbar act the same as the items on a pulldown menu, why are the menu items almost always shown with text while the toolbar buttcons are almost always shown with little images?
 
Text labels, like those on menus, can be very precise and clear - they aren't always, but precision and clarity are their basic purpose. To achieve this, they demand that the user take the time to focus on them and read them, and reading is slower than recognizing images.
 
Pictorial symbols are easy for humans to recognize, but they often lack the precision and clarity of text. Pictographs can be ambiguous until the user learns their actual meaning. Once that learning is done, however, it is never forgotten - recognition remains - whereas text must be read time and again.
 
Buttcons have all of the immediacy and visibility of buttons, along with the fast recognition capability of images. They pack a lot of power into a very small space. As usual, their great strength is also their great weakness: the image part.
 
Relying on pictographs to communicate is all right as long as the parties have agreed in advance what the images mean. They must agree in advance because the meaning of an image cannot be guaranteed to be unambiguous.
 
Many   designers   think   they   must   invent   visual   metaphors   for   buttcons   that   adequately   convey  
meaning  to  first-­‐time  users.  This  is  a  quixotic  quest  that  not  only  reflects  a  misunderstanding  of  the  
purpose  of  toolbars,  but  reflects  the  futile  hope  for  magical  powers  in  metaphor.  
 
The image on the button doesn't need to teach the user its purpose; it merely needs to have a bold and memorable visual identity. The user will already have learned its purpose through other means, such as the menus. It is a lot easier to find images that represent things than it is to find images that represent actions and
relationships. A picture of a trash can or a printer is easy to understand, but it is difficult to find suitable icons for 'Apply', 'Cancel', 'Adjust', and the like.
 
The user may also find himself wondering what a picture of a printer means. He may guess that the icon serves some purpose related to the printer, like 'Printer Settings' or 'Printer Status', but once he has learned that it prints a copy on the active printer, he will never be troubled by that icon again.
 
The  Problem  with  using  BOTH  
It might seem a good idea to use both text and an image for the representation. The original Macintosh desktop had a text subtitle under each icon: icons are useful for quick classification, but beyond that, we need text to tell exactly what the object is for.
 
The problem is that using both text and images is very expensive in terms of pixels. Besides, toolbar functions are often dangerous or dislocating, and offering too-easy access to them can be like leaving a loaded pistol on the coffee table. The toolbar is for users who know what they are doing, and the menu is for the rest.
 
Some designers add text to buttcons, on the right, left, or below. This is the worst option, because the technique consumes so much space. These designers are trying to satisfy two kinds of users: one who wants to learn in a gentle, forgiving environment, and another who knows where the sharp edges are but sometimes needs a brief reminder. The Windows application designer must find a bridge that resolves the clash between these two kinds of users.
 
Immediate  Behavior    
Unlike menus, we don't depend on toolbar buttcons to teach us how they are used. Although we depend on buttcons for speed and convenience, their behavior should not mislead us. Toolbar buttcons should become disabled if they are not applicable to the current selection. They may or may not gray out, but if a buttcon becomes moot, it must not offer the pliant response: the buttcon must not depress. Some programs make moot buttcons disappear altogether, and the effect of this is ghastly.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
It  was  the  toolbar's  invention  that  finally  allowed  the  pedagogical  purpose  of  the  menu  to  emerge.  
Once   the   frequently   used   functions   were   put   into   toolbar   buttcons,   the   pulldown   menus  
immediately ceased to be the primary function idiom. For users with even slight experience, if a buttcon existed, it was much faster and easier to use than pulling down a menu and selecting an item - a task requiring significantly more dexterity and time than merely pointing and clicking on one stationary spot. Before the advent of the toolbar, the pulldown menu was home to both pedagogy
and  daily-­‐use  functionality.  Although  the  two  purposes  were  intermixed,  software  designers  didn’t  
segregate   them   into   different   idioms   until   the   toolbar   demonstrated   its   potency.   However,   once   the  
toolbar  became  widespread,  the  menu  fell  into  the  background  as  a  supporting  character.  The  only  
programs  where  the  menu  is  still  used  for  daily  use  functions  are  programs  with  poorly  designed  or  
non-­‐existent  toolbars.  
 
ToolTips  
The  big  problem  with  toolbar  buttcons  is  that  although  they  are  fast  and  memorable,  they  are  not  
decipherable.  How  is  the  new  user  supposed  to  learn  to  use  buttcons?  
 
The Macintosh was the first platform to introduce a facility called balloon help.
 
Balloon help is one of those frustrating facilities that everyone can clearly see is good, yet nobody uses. Balloon help is a rollover facility: it appears as the mouse cursor passes over something, without the user pressing a mouse button, similar to active visual hinting.
 
Balloon help takes the form of little speech bubbles, like those in comics, which appear next to the object the mouse points to.
 
Balloon help doesn't work, for a couple of good reasons. Primarily, it is founded on the misconception that it is acceptable to discomfit daily users for the benefit of first-timers. The balloons are too big, too long, too obtrusive, and too condescending; they are very much in the way. Most users find them so irritating that they keep them turned off. Then, if they have forgotten what some object is, they have to go up to the menu, pull it down, turn balloon help on, point to the unknown object, read the balloon, go back to the menu, and turn balloon help off again. Uffff! Much pain! :(
 
Microsoft, on the other hand, is never one to make things easy for the beginner at the expense of the more frequent user. It invented a variant of balloon help called ToolTips, one of the cleverest and most effective user interface idioms.
 
ToolTips seem much like balloon help, but there are minor physical differences that have a huge effect from the user's point of view. A ToolTip explains only the purpose of gizmos on the toolbar; it doesn't describe other things, and it omits the very basic explanations that balloon help included.
 
A ToolTip contains a single word or a very short phrase. It doesn't attempt to explain in prose how the object is used. This is the most important advance that ToolTips made over balloon help. Apple wanted its bubbles to teach things to first-time users; Microsoft figured that first-timers would just have to learn the hard way how things work.
 
By making the gizmos on the toolbar so much more accessible to normal users, ToolTips have allowed the toolbar to evolve beyond simply duplicating the menus. They have freed the toolbar to take the lead as the main idiom for issuing commands to sovereign applications. This also allows the menu to quietly recede into the background as a command vector for beginners and for invoking occasionally used functions. This natural order, with buttcons as the primary idiom and menus as a backup, makes sovereign applications much easier to use. For transient programs, though, most users qualify as first-time or infrequent users, so the need for buttcon shortcuts is much less.
 
ToolTip windows are very small, and they have the presence of mind not to obscure important parts of the screen. They appear underneath the buttcon they are explaining and label it without consuming the space needed for dedicated labels. There is a critical time delay, about half a second, between placing the cursor on a buttcon and having the ToolTip appear. This is just enough time to point to and select the function without getting the ToolTip. It means that in normal use, when you know full well what function you want and which buttcon to use to get it, you can request it without ever seeing a ToolTip window. It also means that if you forget what a rarely used buttcon is for, you only need to invest half a second to find out.
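The half-second delay described above can be sketched as a tiny state machine: a tip appears only once the cursor has rested on the same buttcon for the full delay, and moving to a new buttcon restarts the clock. This is a minimal plain-Python illustration; the class and method names are my own, not from any real toolkit:

```python
class TooltipTimer:
    """Decides when a ToolTip should appear for a hovered buttcon."""

    def __init__(self, delay=0.5):
        self.delay = delay          # seconds the cursor must rest
        self.target = None          # buttcon currently under the cursor
        self.hover_start = None     # time the cursor arrived there

    def on_hover(self, buttcon, now):
        """Called as the cursor moves; a new target restarts the timer."""
        if buttcon != self.target:
            self.target = buttcon
            self.hover_start = now

    def tip_visible(self, now):
        """True once the cursor has rested long enough on one buttcon."""
        if self.target is None:
            return False
        return (now - self.hover_start) >= self.delay


timer = TooltipTimer(delay=0.5)
timer.on_hover("print", now=0.0)
print(timer.tip_visible(now=0.2))   # False: quick click, no tip yet
print(timer.tip_visible(now=0.6))   # True: cursor rested, tip shows
timer.on_hover("save", now=0.6)     # moving to a new buttcon resets the timer
print(timer.tip_visible(now=0.7))   # False
```

The point of the delay value is exactly what the text says: fast, deliberate clicks never trigger the tip, while a hesitating cursor gets help after half a second.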
 
That little picture of a printer may be ambiguous until I see the word "Print" next to it. There is now no confusion in my mind. If the buttcon were used to configure the printer, it would say "Configure Printer", or even just "Printer", referring to the peripheral rather than to its function. The context tells me the rest. The economy of pixels is superb.
ToolTips have completely spoiled me for anything else. I now get upset with any program that doesn't offer them. Toolbars without ToolTips force me to read the documentation or, worse, to learn their function by experimentation. And because toolbars contain immediate versions of commands meant for moderately experienced users, they inevitably contain some that are dislocating or dangerous. Explaining the purpose of buttcons with a line of text on the status line at the bottom of the screen just isn't as good as ToolTips that appear right where I'm looking. That cheerful little yellow box with a terse word or two tells me all I need, where I need it, when I need it.
Do not create toolbars without ToolTips. In fact, ToolTips should be used on all pictographic buttcons, even those on dialog boxes.

Beyond  the  buttcon  

Once people started to regard the toolbar as something more than just an accelerator for the menu, its growth potential became more apparent. Designers began to see that there was no reason other than habit to restrict the gizmos on toolbars to buttcons.

Opening the door to other popular gizmos was just the beginning. Soon designers began to invent new idioms expressly for the toolbar. With the advent of these new constructions, the toolbar truly came into its own as a primary control device, separate from and in many cases superior to pull-down menus.

After the buttcon, the next gizmo to find a home on the toolbar was the combo box, as in Word's style, font, and font size controls. It is perfectly natural for these selectors to be on the toolbar. They offer the same functionality as those on the pull-down menu, but they also offer a more object-oriented presentation by showing the current style, font, and font size as properties of the current selection. The idiom delivers more information in return for less effort by the user.
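That two-way behavior, where the combo box both displays the selection's current property and changes it, can be sketched in a few lines. This is my own plain-Python illustration with hypothetical names, not the actual Word implementation:

```python
class FontCombo:
    """A toolbar combo box that displays and sets the selection's font."""

    def __init__(self, fonts):
        self.fonts = fonts      # entries available in the drop-down
        self.displayed = None   # what the combo currently shows

    def sync_to_selection(self, selection_font):
        # Reading direction: the combo reflects the selection's property.
        self.displayed = selection_font

    def choose(self, font, selection):
        # Writing direction: picking an entry changes the selection.
        if font in self.fonts:
            selection["font"] = font
            self.displayed = font


combo = FontCombo(["Arial", "Courier", "Times"])
selection = {"font": "Times"}
combo.sync_to_selection(selection["font"])
print(combo.displayed)      # Times: the combo shows the selection's font
combo.choose("Courier", selection)
print(selection["font"])    # Courier: the selection now carries the choice
```

The reading direction is what makes the idiom object-oriented in the text's sense: the control is tuned to the selected object, not just a command trigger.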

Once combo boxes were admitted onto the toolbar, the precedent was set, the dam was broken, and all kinds of idioms appeared, many of them quite effective. The original buttcon was a momentary buttcon, one that stays pressed only while the mouse button is pressed. This is fine for invoking functions but poor for indicating a setting. In order to indicate the state of selected data, new varieties of buttcons had to evolve from the original.
The first variant was a latching buttcon, one that stays depressed after the mouse button is released.

Indicating  State  

This variety of gizmos contributed to a broadening in the use of the toolbar. When it first appeared, it was merely a place for fast access to frequently used functions. As it developed, gizmos on it began to reflect the state of the program's data. Instead of a buttcon that simply changed a word from plain to italic text, the buttcon now began to indicate by its state whether the currently selected text was already italicized. The buttcon not only controlled the application of the style, but also represented the status of the selection with respect to that style. This is a significant move toward a more object-oriented presentation of data, where the system tunes itself to the object that is selected.
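The shift from a momentary buttcon to a state-indicating latching buttcon can be sketched as follows. This is my own minimal illustration (the class name is hypothetical, not from any real toolkit): pressing the buttcon toggles the style on the selection, and changing the selection re-syncs the buttcon so it always shows whether the selected text already carries the style.

```python
class LatchingButtcon:
    """A toolbar buttcon that both applies a style and shows its state."""

    def __init__(self, style):
        self.style = style      # e.g. "italic"
        self.latched = False    # depressed = style applies to the selection

    def sync_to_selection(self, selection_styles):
        """Called when the selection changes: reflect the data's state."""
        self.latched = self.style in selection_styles

    def press(self, selection_styles):
        """Toggle the style on the selection and latch accordingly."""
        if self.style in selection_styles:
            selection_styles.discard(self.style)
        else:
            selection_styles.add(self.style)
        self.sync_to_selection(selection_styles)


italic = LatchingButtcon("italic")
selection = {"bold"}                # the selected text is bold only
italic.sync_to_selection(selection)
print(italic.latched)               # False: the selection is not italic
italic.press(selection)             # user clicks the buttcon
print(italic.latched)               # True: style applied, buttcon latches
print(sorted(selection))            # ['bold', 'italic']
```

The key design point mirrors the text: `sync_to_selection` makes the control represent the data, while `press` makes it command the data.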

Toolbar Morphing

Microsoft has done more to develop the toolbar as a user interface idiom than any other software publisher, and this is reflected in the quality of their products. In their office suite, all of the toolbars are highly customizable. Each program has a standard battery of toolbars that the user can choose to make visible or invisible. If they are visible, they can be dynamically positioned in one of five locations. They can be attached, referred to as 'docked', to any of the four sides of the program's main window: you click the mouse anywhere in the interstices between buttcons on the toolbar, drag it to any point near an edge, and release. If you drag the toolbar away from the edges, it configures itself as a floating toolbar, complete with a mini caption bar. Very clever, but not as clever as the customizability of the individual toolbars.
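The five-position behavior, docked to one of four edges or floating, reduces to a simple decision: if the toolbar is dropped near an edge of the main window, it docks there; anywhere else, it floats. A minimal sketch, where the function name and snap threshold are my own assumptions rather than Microsoft's actual logic:

```python
def dock_position(x, y, width, height, snap=24):
    """Return where a toolbar dropped at (x, y) ends up inside a
    width x height window: one of the four docked edges, or 'floating'."""
    # Distance from the drop point to each edge of the main window.
    edges = {
        "left": x,
        "right": width - x,
        "top": y,
        "bottom": height - y,
    }
    side, distance = min(edges.items(), key=lambda item: item[1])
    # Near an edge: dock there. Anywhere else: a floating toolbar.
    return side if distance <= snap else "floating"


print(dock_position(5, 300, 800, 600))    # left
print(dock_position(400, 10, 800, 600))   # top
print(dock_position(400, 300, 800, 600))  # floating
```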

Customizing toolbars

Microsoft has clearly seen the dilemma: toolbars should present the frequently used functions for all users, but those functions are different for each user. This conundrum is solved by shipping the program with their best guess at an average person's daily-use gizmos and letting users customize things from there. This solution has been diluted somewhat, however, by the addition of non-daily-use functions. Clearly, amateurs got their hands on the Word toolbar. Its default buttcon suite
contains functions that certainly are not frequently used. Things like 'Insert AutoText' or 'Insert Excel Spreadsheet' sound more like marketing features than practical daily options for the majority of users. While they may be useful at times, they are not used frequently by throngs of users.

The program gives the more advanced user the ability to customize and configure the toolbars to his heart's content. There is a certain danger in providing this level of customizability, as it is quite possible for a reckless user to create an unrecognizable and unusable toolbar.

Mitigating this is the fact that it takes some effort to totally wreck things. People generally won't invest "some effort" in creating something that is ugly and hard to use. More likely, they will make just a few custom changes, entered one at a time over the course of months or years. The toolbars on my customized copy of Word look just about the same as the toolbars on anyone else's, apart from a couple of exceptions.

I've added a smiley-face buttcon that inserts the date in my favorite format. I've added a buttcon from the format library that specifies SMALL CAPS, a format I seem to use a lot more than most people. If you were to use my word processor, you might be thrown by the smiley face and the small caps, but the overall aspect would remain familiar and workable.

Of course, Microsoft has extended the idiom so that you can create your own completely new, completely custom toolbars. This feature is certainly overkill for normal users, but corporate MIS managers might like it a lot for creating that corporate look.

My favorite part of the Microsoft toolbar facility is the attention to detail. You have the ability to drag buttcons sideways a fraction of an inch to create a small gap between them. This allows you to create 'groups' of buttcons with nice visual separations. Some buttcons are mutually exclusive, so grouping them is very appropriate. You can also select whether the buttcons are large or small. This is nice compensation for the disparity between common screen resolutions, ranging from 640x480 to 1280x1024; fixed-size buttcons can be either unreadably small or obnoxiously large if their size is not adjustable. You also have the option to force buttcons to be rendered in monochrome instead of color, and you can turn ToolTips off.

One of the criticisms of the Microsoft toolbar facility is its scattered presence in the menus. There is a "Toolbars..." item on the View menu that brings up a small dialog box for selecting which toolbars are visible, creating new toolbars, turning ToolTips on or off, turning color on or off, and selecting large or small buttcons. However, if we need to change the selection of buttcons on a toolbar, we have to go to the "Customize..." item on the "Tools" menu, which brings up a dialog box that allows us to
configure the toolbars, the keyboard, and the menus. There is a button on the toolbars dialog that takes the user directly to the customize dialog, but that is a hack compared to a simple, unified view. This design looks like the result of either accident or user testing rather than the judgment of a skilled designer. Splitting the toolbar settings into two separate dialogs is irrational; it is not at all clear where to find the various toolbar settings.