Part II: Navigating the Twilight Zone – A futuristic view into Bi-Modal IT as relevant to Higher Education

NGSIS’ project manager, Vikram Chadalawada, discusses the new Gartner project management and governance model and how it applies to the NGSIS program. Part II of the series focuses on Bi-Modal Readiness and how to effect change within your organization. (Click here for Part I)

Bi-Modal Readiness

The way organizations view change has itself been changing drastically, from the birth of the World Wide Web to the present day, and it continues to evolve into a post-Nexus phase. The dialogue below from the movie The Bourne Ultimatum illustrates the importance of risk management within enterprises.

Pamela Landy: “The reason Bourne went to Moscow was to see the daughter of his first target.”

Table displaying Before the Web, Before the Nexus and After the Nexus
Source: Gartner

CIA Director Ezra Kramer: “What’s your point, Pam?”

Pamela Landy: “Maybe he was retracing his steps. Just looking for something… something in his past. Maybe he hasn’t found it yet. We need to know what it is.”

CIA Director Ezra Kramer: “You’re telling me he’s not a threat to this agency?”

Pamela Landy: “I think if he wanted to hurt us he could have sent the tape to CNN.”

CIA Director Ezra Kramer: “Maybe he still will. My number one rule is hope for the best, plan for the worst. As far as I’m concerned, Bourne’s still a serious threat, until proven otherwise.”

In order for organizations to get better at planning for the worst and mitigating risks in a Bi-Modal landscape, they need to recognize that business cannot be ‘as usual’. Operating in a Bi-Modal framework requires specific methodologies to be created and implemented at an organizational level; the ‘One Size Fits All’ doctrine is bound to fail when applied to Bi-Modal enterprises. Before the web, Project Portfolio Management (PPM) was Episodic (i.e. one and done). After the web but before the Nexus of consumerization, PPM became Serial (i.e. a continuous development model with cumulative scope and a focus on people, process and technologies). After the Nexus, PPM will be Fluidic (i.e. a streaming development model with overlapping scope and the agility to turn around complex products at mass scale while keeping costs contained).

Sprinter versus Runner
Source: Gartner

Gartner analysts Frances Karamouzis and Ruby Jivan predict that, by 2018, the total cost of ownership for business operations will be reduced by 30% through smart machines and industrialized services. This means organizations will need to separate their traditional PPM practices from a more agile PPM framework. One can think of the PPM process within Mode 1 as a Marathon Runner and the PPM process within Mode 2 as a Sprinter.

Bringing it home:

Change is happening at the speed of thought, and 45% of project management failures have been attributed to ineffective organizational change management. Given that Bi-Modal IT is a current phenomenon that necessitates fundamental changes to operational frameworks within IT organizations, three shifts need to happen in order for change to be effective in a Bi-Modal enterprise such as U of T.

Three Fundamental Shifts required

IT organizations will need to move away from simply satisfying defined requirements to actually driving business outcomes while generating functional and strategic value. In other words, IT portfolios will need to focus on advancing ecosystems (i.e. creating interactive ecosystems of assets within a given project or solution) as opposed to only delivering solutions on time, on budget and on scope. Finally, teams (especially ones operating within Mode 2) must have instability built into their design; an example of this would be the F-16 fighter aircraft, which is unstable by design, giving it the agility and manoeuvrability required for its function, as opposed to a commercial aircraft that is built to be stable.

Source of Change
Source: Gartner

In order for organizations to successfully navigate through this twilight zone and come out as sustainable survivors, a contextual change management framework needs to be developed for IT organizations. This framework needs to first identify the magnitude of change within each mode (i.e. Mode 1 or Mode 2), and the enterprise must establish the steps (involving people, process and technology) needed to support the entire organization through this change. Details of the change implementation will need to be created, planned and socialized with all stakeholders impacted by the change, and appropriate communication mechanisms will need to be established during each phase of the overall implementation. The question then becomes, ‘How ready is the organization to embark on this epic journey?’ The answer lies in a series of honest and transparent conversations at multiple levels and across multiple stakeholder groups, resulting in a strategic plan that can then drive the change vision through to completion. As Jack Welch, the former CEO of GE, said, “If the rate of change on the outside exceeds the rate of change on the inside, the end is near.”

Source: “Effective Governance of Bimodal IT Projects Requires Adopting a More Outcome-Centered Approach” by Donna Fitzgerald and Bill Swanton, Gartner, 2015. https://www.gartner.com/doc/3003717

Navigating the Twilight Zone: A futuristic view into Bi-Modal IT as relevant to Higher Education

Part I: In this two-part series, NGSIS’ project manager, Vikram Chadalawada, discusses the new Gartner project management and governance model and how it applies to the NGSIS program.

Bi-Modal IT becoming reality
Source: Gartner

The big questions that always tend to linger within mature organizations are, “Now that we are here, what’s next? And how can we keep up with technology, especially its multi-faceted changing landscape?” Continuous change seems to be the adrenalin fix that IT organizations industry-wide are injecting themselves with, especially in response to external peer pressure. The question is, how conscious is this injection? According to Gartner research conducted earlier this year, within the next 5 years all project management and portfolio management practitioners, IT governance executives and IT investment decision makers must come to terms with the impacts of the trends that are reshaping IT landscapes. It is also predicted that third-era skills will drive out obsolete project management skills, standards and certifications, forcing organizations to invest in and reinvent their human capital. Given the evolution of IT and the post-mobile world, it is safe to assume that the aliens have landed. In fact, this can be called the age of digital business, where any given transaction doesn’t just involve people and businesses; it is becoming visibly inclusive of the Internet of Things (IoT).

Enter Bi-Modal IT

Evolutionary fluctuations within the IT industry as a whole have woken organizations up to the presence of Bi-Modal IT, a phenomenon that has become a reality in most organizations, especially within the higher education sector, and U of T is not a lone wolf in this space. Mature IT organizations tend to operate in the twilight zone: on one side they work within Mode 1, which primarily involves legacy systems where development is sequential and is often equated with stability and reliability; on the other side they work within Mode 2, which involves disruptive and cutting-edge technologies where development efforts are typically exploratory and are often equated with agility and flexibility. Given a dichotomy that resembles a parallel universe, organizations are currently faced with the challenging task of operating within this duality, and this is only going to get increasingly complex with time. Research trends also suggest that while 37% of enterprise spending on IT is currently funded outside of the IT budget, by 2017 this number will rise to 50%. All of this to say, IT leaders should be more concerned about being late to the party (i.e. incorporating digital business practices into their methodology) than about wasting resources. It is predicted that by 2020 around 7.3 billion personal devices will be floating around this planet and 30 billion Internet-connected things will emerge, as compared to 1.6 billion personal devices and 0.9 billion Internet-connected things.

Lots of excitement, right? Well, this is not the end of our ‘back to the future’ story. There are many other forces affecting the way organizations behave and respond to emerging trends. The Nexus of forces (social media, mobile devices, cloud technologies and the emergence of big data) moves project management, portfolio management and IT in general towards a new frontier that requires a lot of ‘rethinking the current state’. With the advent of all these disruptive forces within the industry as a whole, organizations are compelled to operate in both Mode 1 and Mode 2, making these universes co-exist even though the governing laws are different for each mode.

Mode 1 versus Mode 2

Coexistence of Universes
Source: Gartner

If one were to produce a reality show on organizational operations, it could easily be cast with two sets of actors falling into the Mode 1 and Mode 2 groups. Just to be clear, the delineation between these two modes is not intended to paint a good-versus-evil picture of organizational behaviour and process; rather, it is a means to help organizations identify where they can place themselves within an enterprise-level stratosphere. Using the pre-digital and digital evolution benchmarks to guide operating principles within each mode, one can establish that Mode 1 is typically enterprise-focused, belongs to the analog/pre-digital era and has traditional vendors implementing solutions within this space, while Mode 2 is dominated by a hyperscale focus and new vendors that lead solution implementation within this space.

Part II will focus on Bi-Modal Readiness and how to effect change within your organization.

Source: “Effective Governance of Bimodal IT Projects Requires Adopting a More Outcome-Centered Approach” by Donna Fitzgerald and Bill Swanton, Gartner, 2015. https://www.gartner.com/doc/3003717

Usability Testing, Part 1: “What is it?” and “Why do it?”

This post begins a series discussing usability testing: first the “what and why” of usability testing, followed by how to prepare and conduct usability tests, and finally how to capture observations and apply the important insights gleaned from usability testing.

Mobile testing that records both the test participant and her screen interactions.

Usability testing assesses whether your product (work-in-progress or existing) accomplishes what it set out to do in a manner that the person using it deems successful and enjoyable. The technique asks people who are or would be the real users of a product to navigate task-based scenarios representing real-world use cases of that product. This testing is administered, observed and discussed in a controlled environment, with subsequent insights captured as potential improvements to the product.

Real-world users (often students, given our U of T context) have the innate ability to identify the oversights in product design. It is important to be aware that our professional experience as U of T staff and faculty often biases us towards a fluency in jargon, business processes, institutional policies, business organization and information hierarchies that a layperson shouldn’t reasonably be expected to know. Through usability testing you’ll be able to quickly identify issues that would otherwise cause the majority of your product’s users to falter.

Desktop testing with on-screen task prompts and observers capturing feedback.

Structured testing enhances the confidence we have in our findings and allows for an efficient workflow. By creating a script of specific tasks and repeating the same testing/feedback protocol with numerous people, you’ll quickly see trends emerge that clearly illustrate the challenges users face when trying to find important information and complete important tasks. Catching these potential issues before development work begins in earnest saves resources (both time and money) for your project team and avoids wasted time, frustration and unintended outcomes for users.
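
To make the ‘repeatable protocol’ idea concrete, here is a minimal sketch (in Python, using entirely hypothetical task names and made-up session results) of how completion rates for each scripted task could be tallied across participants so that trends stand out. It illustrates the bookkeeping behind structured testing, not the actual tools or data our team uses.

```python
from collections import defaultdict

# Hypothetical script: every participant attempts the same scripted tasks.
TASKS = [
    "Enrol in a course",
    "Check your most recent grades",
    "Order a transcript",
]

# Hypothetical session results: one dict per participant, mapping each task
# to whether it was completed without assistance from the facilitator.
sessions = [
    {"Enrol in a course": True, "Check your most recent grades": True, "Order a transcript": False},
    {"Enrol in a course": False, "Check your most recent grades": True, "Order a transcript": False},
    {"Enrol in a course": True, "Check your most recent grades": True, "Order a transcript": True},
]

def completion_rates(sessions, tasks):
    """Return the fraction of participants who completed each scripted task."""
    completed = defaultdict(int)
    for session in sessions:
        for task in tasks:
            if session.get(task):
                completed[task] += 1
    return {task: completed[task] / len(sessions) for task in tasks}

for task, rate in completion_rates(sessions, TASKS).items():
    # Low completion rates flag tasks worth a closer look before development continues.
    print(f"{rate:.0%} of participants completed: {task}")
```

Even a simple tally like this makes it obvious which tasks repeatedly trip people up, which is exactly the kind of trend the structured protocol is designed to surface.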

Hopefully you’ve enjoyed this introduction to usability testing and are interested in learning how to take advantage of it with your own work. Next time we’ll dive into the specifics of what’s required for your testing environment and how to prepare an effective script of testing tasks.

As always, please contact us to discuss this topic further or ask any questions you have.

Staff consultation: Tapping into a wealth of knowledge

In a recent post we discussed formative research from a student perspective and how this is a key input for our user-centred methodology. Just as we gather and synthesize knowledge directly from students, we do the same with our staff and administrative colleagues. In this post we’ll explore our research and consultation efforts that involve staff and faculty perspectives.

To begin with, here are a few stats from the ACORN project. In the last year we have:

  • Held over 60 meetings and presentations
  • Met with 200+ staff and faculty members
  • Had representation from over 100 departments, divisions and/or units

This is in addition to the students we have interviewed, surveyed and had test our design prototypes.

Why does the NGSIS team undertake such intensive consultation? What’s the point of reaching out to all these people and holding all these meetings and presentations? Is all this time spent worth it?

The answer is this: Our user-centred approach depends on the sustained involvement of our community. We can’t build great products for our users if we are working in a vacuum.

For our project teams, this stakeholder involvement yields three critical benefits:

  • First, stakeholders point out potential pitfalls and opportunities that we may not be aware of and would have very little chance of encountering on our own.
  • Second, they help us deepen our understanding of existing frustrations, pain points and processes. We start with a high-level understanding of a topic and with the help of our stakeholders dive deep into the core of an issue. The richness of institutional knowledge and experience in our vast stakeholder group is irreplaceable.
  • Lastly, once we’ve fleshed out our vision and begun building a product, our stakeholders help validate our applied understanding through iterative guidance and testing of the solution we create.

There is a wonderful symbiosis between the project team and our stakeholders. To create a quality product we need to leverage as much knowledge and understanding as possible from those around us. They are integral to our user-centred approach. The investment in time to gain these insights pays off many times over.

As this blog progresses we’ll continue to explore this topic of stakeholder involvement. Stay tuned!

Learning What Works: A Student Perspective on ACORN & User Research

The following blog entry is written by Laura Klamot, NGSIS work-study student extraordinaire and User Research Coordinator. Periodically we’ll feature Laura’s reflections as a student working with the UX team.

Going into my third year of university at UTM got me thinking—and stressing, more often than not—about what comes after. The fear of being handed that degree and then finding a job. Students are told that you can’t just finish college or university and trade in that piece of paper for a career. They’re told that a career has to be built and it has to be earned. But it’s daunting when you’re plagued with stories of being thrown into the “real world” after having been a student for so long and potentially having a part-time job in a field unrelated to your career path of choice.

I’ve had two jobs during my academic career. While I was always glad I had somewhere to clock in at all, both occupations involved selling baked goods, and I don’t want to be a baker. After leaving the first for the second, I got around to thinking that to make my university experience and that whopping deregulated tuition fee on ROSI worthwhile, it would help to look for work related to what I’m studying.

I knew a few of the employment resources that U of T makes available to students, one being the Career Learning Network, which I found after browsing UTM’s Career Centre website. It was here among the long list of employment and volunteer postings that I found, applied for and eventually accepted the position of User Research Coordinator at the St. George campus for the NGSIS Program User Experience Team. They needed someone to help them gain student insight surrounding the design of the ACORN project, a front-end overhaul of the current ROSI system that U of T students use to enrol in courses, check grades and order transcripts, among many other functions.

I was excited to see that there was a position I could see myself enjoying, and could learn from to strengthen the knowledge I was gaining in school. Better yet, as delicious as working at a bakery was, I wouldn’t have to sell any cupcakes here.

I’ve learned about problems ingrained in ROSI that I never knew about, never experienced myself, or simply accepted because that was just the way things were. I was ignorant of the fact that users—students like myself—shouldn’t have to work hard to adapt to a service that exists to facilitate their education, and not every student has the same experience using it as I do. This was brought to my attention further after reaching out to students and bringing them into the UX (User Experience) lab to take part in usability tests. One of the things that all U of T students have in common is that unless our program’s department uses another enrolment system, we have to use ROSI. Due to this, many students are excited to see the changes that are being introduced, initially in the form of functional prototypes that they can try out themselves. I schedule usability tests, students come in, they follow a set of scripted tasks while being recorded, we ask questions and they give feedback. We learn a lot from these interactions. What did they love about this version? What did they hate? Why on earth is that thing over there and what does that text mean? The test participants I’ve been in contact with come from an assortment of academic backgrounds and each bring their own unique experiences and personalities to each usability test. It’s becoming more clear that designing a service for the needs of a “typical U of T student” isn’t a simple task, because there is no typical U of T student.

With the help of the members of the UX team, I’ve learned to ask better questions and appreciate the value of having people come into the lab and give their honest opinion at each stage of ACORN’s design process. It’s a complicated system with complicated workflows and lots of work still to be done, but I like to think that the small part I play will make a difference in how students, including myself, will view what comes out at the end of this process.