Hydrogen Properties for Energy Research (HYPER) Lab Teaching

How to write a resume

Want to see my resume?

You’re looking at it.

Here’s why.

This too.


Want to know what I’ve done with, well, anything?

Use the search bar.

When’s the last time you saw a resume with a search bar?



But that’s not why you’re here.

You’re here because they expect you to have one.

They don’t expect you to have a searchable body of work yet.

Wouldn’t that be a fun surprise?


What’s that you say?

They don’t want to see your body of work?

Then what are they hiring you for?


Indicators of performance like GPA, merit badges, gold stars?

Best use of buzzwords?

Software proficiency?

Then what you seek is the baseline, the bar, the minimum.

And that’s ok — play the best hand you’ve got.

Try these templates for starters: ProPEL B.T. Cougar Resume (ME), ProPEL Resume Guidelines.


But they really should be hiring you based on evidence.

Which was why you joined a club or research lab.

So you could amass evidence of freakishly awesome engineering achievement.

Told through stories that apply your engineering skills in context.

If you have that evidence, then why are you still presenting it in the same way as those who don’t?


Here are the resumes of my first student, Ron (Matt) Bliesner: Resume Example

Yes, he had two.

The first was for the engineer that became his boss.

The second was for a machine.


A resume says a lot about you.

Agile versus Waterfall for the big one

A waterfall of pages (Commons)

Two of my PhD students are in the middle of writing their theses/dissertations. No surprise, they missed the awesome seminar by Lean/Agile software pioneer Ryan Martens yesterday. During the seminar Ryan brought up a classic image representing the engineer’s design-build-test progression. The point is to illustrate when and how many times you have to learn in the traditional waterfall engineering design-build-test progression: once, at the end, when you usually don’t have time to revise. The Lean/agile approach to design necessitates that you test (and learn!) about something as quickly as possible. Sometimes you even write the test specification before you begin designing!

Waterfall versus Agile DBT

At this point, Dr. Chuck turned to the class and brought up one of my favorite points for my graduate students: the key to success with your thesis is to write (build) and test (send out for review) as frequently as possible, the lean/agile way. While this may seem obvious to some, it’s not to others.

One of my good friends brought up the point: “But when you are writing and testing so many times you lose the consistency of ‘voice’ and your writing becomes disconnected. So I prefer the single, sit-down, write-it-out approach.” That is the approach my two PhD students, despite the warnings, are currently attempting, like the majority of their peers around the US this spring. So really, which approach should you take, and is agile/lean really any better?

What I tell all of my students is that writing a thesis or dissertation is like running a marathon. It’s up to you to run it in little bits over the course of the next several years, or in one big sprint at the end. You’ve got to get it done. I just know that most people can’t run a marathon cold without conditioning and the last thing we need is someone getting hurt.

Writing a thesis or dissertation is a heavy lift — one of the biggest of your life — and most people wouldn’t go to the gym after sitting in a lab for several years and immediately put their weight onto the squat rack. Too many injuries happen that way.

When is the last time you dove into a big important anything without practicing and preparing first? When is the last time you tested a prototype before swinging for the fence? I’ve written previously about how this has big ramifications on the University system where we almost always choose traditional waterfall. Go to a high performance organization within the university, a performing artist, a chef, a lead grants person, and ask them to perform with a totally new instrument — see how it goes.

And yes, many little bits take more warm-up and cool-down periods, with greater variation in performance. But if you want to get good, really good, it’s going to take a lot of practice. It’ll take the constant drip, drip, drip of building momentum over years to eventually become a torrent you can master. And maybe someday, after you’ve written and published daily for years, you’ll be able to run that marathon with some agility.

You really didn’t think this blog was just for you did you?

Lessons from the performing arts: UI jazz choir

This is likely to be the first in a series where I sit in on the very highest performing educational environments on the Palouse: environments characterized by students who perform, at the highest levels, the actual profession they came to the university for. The goal of my visits is to distill the common themes and heuristics for developing high-performance professionals ready to contribute to society.

I came to watch Dan Bukvich lead the UI jazz choir. Dan is a longtime friend from my younger days at the UI. Once, when I was a junior, on a hunch he invited me into his musical composition class to give demos on engineering education. I really suspect it was to teach me a lesson: partly that I had no idea what I was doing and was out of my league, which was true.

If you’ve ever attended a UI Jazz Choir performance (they open the Lionel Hampton Jazz Fest this year), Dancers Drummers Dreamers, or a UI jazz percussion performance, you’ve seen Dan’s work and know that his educational mastery is immediately self-evident. The challenge for me is to steal as much as I can and hope that by the time I’m 70 I can get engineers to where Dan’s groups are performing. After all, I suppose it is called the performing arts for a reason…

Notes from Day 1

The first day of the semester I immediately noticed differences from engineering.

1) Class is held in a cavernous room. The arrangement is choral/systemic on the Spiral vMeme scale (damn, that spiral vMeme structure stuff really works; see diagrams I & G below, with a little bit of F, and the pic from the classroom). Everyone is standing, moving, and energetic. We’re totally missing this opportunity in engineering. I’ve never seen this structure in an engineering classroom.

2) Dan says to me at the onset: “the worst thing I could do is hand out a syllabus and send everyone home. Or hand out new music.” (Reminds me of Chuck Pezeshki’s capstone class.) This is the first day, and everyone starts singing a Dan original composition, “Pomello Monkey Fruit Banana,” within the first 2 minutes. A song the majority already know. 90% of the group returned from last semester. I can see new people in the back looking around and trying to figure out what they need to do. Veterans offer quick pointers. A lot of the best helpers in here are not music majors; they can’t even read the music! Some have been coming to this for over 30 years. No auditions, everyone can come, but those not keeping up will be prompted later.

3) Then comes the warm-up. Dan says: “someone in your section will teach you this warm up.” And they begin, following his commands.

4) All of the songs are preordered and set up in their stands. This allows variance while maintaining repetition of the basics. Everyone is still standing. The only pause between songs is to establish the beat; go go go. “Next” and everyone moves to their new position. New music is handed out mid-song. All stands are shared.

5) Dan calls the next song: “percussionists sing if you don’t mind”. Off key, stops, starts conducting in 2s, sizzle your line. Starts counting, has them count their part and clap their transitions. Analogous to having someone switch tools but perform the same procedure. Stomps on the piano note that is required if people are off.

“Is it possible to learn this in 9 rehearsals? Yes. Will I force you? I don’t know yet. Look at 23, ask your friends to help you.”

“The worst mistake you could make is not wanting to make a mistake. You have to go for it.”

Slows beat down and controls progression through notes.

“You’ve got 1 minute to memorize the soulfish.” Go.

“Turn around.” They do it again; they can’t see the music this way, so they all start keeping their own count.

Switch to a song they know to end with momentum.

They are having fun, and so are the new people, who are trying to figure out as much as possible.

“Please take a minute and talk through the moves with your section.” Goes around and asks everyone how their break was.

Dan says to me: “I don’t even know the moves they are teaching each other right now; they are making it up.”

Takeaways:

I have never witnessed a university group engineer something that is reconfigurable enough to allow a high-performance community to work with it like the Jazz Choir does. Our workspace could be the first.

This was fun for everyone. Where is the fun in engineering? The key to the future is finding the fun in engineering, then not killing it. We have so, so far to go to realize a high performance community functioning at the I & G vMemes.

Scaffolding Growth of Agency in Engineering Design

The NSF currently has open programs for Research in the Formation of Engineers, with the primary emphasis on:

  • Introductions to the profession at any age;
  • Acquisition of deep technical and professional skills, knowledge, and abilities in both formal and informal settings/domains;
  • Development of outlooks, perspectives, ways of thinking, knowing, and doing;
  • Development of identity as an engineer and its intersection with other identities; and
  • Acculturation to the profession, its standards, and norms.

Scaffolding Growth of Agency within Engineering Design is, in layman’s terms, what and how we teach engineering design in order for students to master the empathetic connections necessary to confidently contribute to client and community. This is THE problem of education: how to weave together multiple knowledge structures into an empathetic scaffolding to most efficiently achieve mastery of personal agency?

The Structure/Framework/Scaffolding

Any effective structure must span a taxonomy of Spiral value Memes and associated knowledge structures. In short, competencies within the following general knowledge/value structures should be developed:

  1. Authority: A student is given authority and responsibility over a defined area contributing to an end client/community need.
  2. Legalistic: A student must thoroughly research and document the vocabulary, rules, and laws required to know where a new contribution could exist within the area of need (analogous to the introduction, literature review, and theory sections to a paper, report, or thesis).
  3. Performance: A student should show a level of understanding and ability to perform with the established techniques in the area, then develop a heuristic/design/process to continually perform. This is the typical advanced goal and terminal end to a student’s learning.
  4. Community: A student needs to be connected to the broader community that needs the contribution. A connection/empathy to the end stakeholders and group cohort is needed to ensure the transition into the workforce and sustain resource flows.
  5. Systemic: Through repeated cycles of these levels, the student now understands the complexity of personal development. They now take ownership by contributing to furthering/continuously improving the system, developing their own scaffolded learning materials and demos for others to follow.

Moreover, a valid framework must map these to the complexities of the design environment — an inherently flexible, heuristic-driven environment. The design environment is structured to address the unknown-unknowns so commonly encountered in the design process. This necessitates a heuristic “zone defense” approach as opposed to an algorithmic “man-on-man defense”. This zone approach is known as the Jigsaw classroom method. A list of the roles could include:

  1. Builder: Fabrication from the raw materials and components into the final product. A writeup of the builder role is here.
  2. Reporter: Communication on many levels what is happening and why. A writeup of the reporter role is here.
  3. Process “Pro”: Information guru who is experienced with the codes and standards within an area and adept at developing processes for executing them. A writeup of the process pro is here.
  4. Theorist: Theoretically minded and skilled with implementing and analyzing the physical arts through mathematics. A writeup of the theorist role is here.
  5. Liaison: A teamwork-mastery polymath who can fill in and improvise in the above roles and can spot when the team is stuck in a paradigm or has unknowns building to a critical level. A writeup on the role of the liaison is here.

Here is a matrix intersecting the knowledge paradigms with these design team roles into a scaffold/framework that provides example learning modules within each intersection:

| Role | Authority (1) | Legalistic (2) | Performance (3) | Community (4) | Systemic (5) |
|---|---|---|---|---|---|
| Builder (B) | Workspace | Safety & basic operation | Lean & multiple certification | Facilitating team build process | 5S/Kaizen lead fab developer |
| Reporter (R) | Personal labbook/webspace | Technical writing & grammar | Skilled persuasive writing & presenting | Shared webspace + forums & papers | Lead webspace comm. manager |
| Pro (P) | Library | Standard information literacy | Multiple heuristic code interpretation | Community standards group | Continuous info improvement lead |
| Theorist (T) | Textbook | Most eng. courses | Thesis delivery + improvisation | Group tutor/mentor | Tutor/area steward |
| Liaison (L) | Teammates | PMBOK | Real resources and project delivery | Multi-cycle team stewardship | Steward leadership pipe |
| Year | Sophomore | Junior | Senior | Grad 1 | Grad 2 |


Notice how the levels of attainment naturally associate to years of collegiate education, culminating in a master’s degree. While the end goal is mastery within each of the Jigsaw role areas, time and resource constraints likely require specialization determined by the individual. I recently wrote about our ongoing shift towards a free-form Montessori-themed environment for research in the lab; this free-form environment allows natural load-leveling among these tasks and roles within a group. This is true not only for the research lab, but also for the classroom and student club environments.

How to study efficacy of the framework/scaffold?

Each level of attainment corresponds to a different Spiral vMeme value set. Vocabulary and the approach to structuring information change within each of these levels. By simply surveying student written communications and how they structure information, a distribution or weighting among the levels can be deduced. The goal is then to continually shift the weight of that distribution to the right.
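As a minimal sketch of how such a survey could be automated, the snippet below counts marker words for each level in a piece of student writing and normalizes the counts into a weighting distribution. The marker vocabularies here are placeholders I made up for illustration; a real study would need validated word lists for each vMeme level.

```python
# Hedged sketch: estimate a per-level "weighting distribution" from student
# writing by counting marker words. MARKERS is a hypothetical vocabulary,
# not a validated instrument.
import re
from collections import Counter

MARKERS = {
    "authority":   {"must", "required", "assigned", "rules"},
    "legalistic":  {"standard", "code", "regulation", "cite"},
    "performance": {"optimize", "results", "tested", "improved"},
    "community":   {"we", "team", "stakeholders", "users"},
    "systemic":    {"system", "process", "framework", "iterate"},
}

def level_weights(text):
    """Return the fraction of marker-word hits attributable to each level."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for level, markers in MARKERS.items():
            if word in markers:
                counts[level] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {level: counts[level] / total for level in MARKERS}
```

For example, `level_weights("We tested and improved the system as a team")` weights the distribution toward the performance and community levels. Tracking these weightings across a semester of writing is one crude way to see whether the distribution is shifting to the right.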

The remaining question focuses on which roles to study through this progression that will add the most value to our immediate community given the limited available resources. The area of teamwork/leadership is well covered by nearly all disciplines. Theory is the focus of the traditional engineering curriculum. The information role is related to the libraries and has potential, although it is more difficult for traditional students to grasp without a work environment or relevant task. This leaves fabrication and communication. Coupling the two allows the communication development to be easily analyzed to assess progress along the fabrication spectrum.

WSU has a system for giving students personal wordpress webpages (similar spine format to this site). It is possible to use text analysis software to then gather data within each of the personal webpages and track the students as they move into collaborative information environments (such as this site), where it is still possible to track what changes a person makes and how they make them.
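A rough sketch of the data-gathering step: standard WordPress sites expose their posts through the REST API at `/wp-json/wp/v2/posts`, so a small script can pull each student's posts and compute simple metrics (here, word count per post date). Whether WSU's hosted instances expose that endpoint is an assumption to verify, and the site URL would be each student's actual page.

```python
# Hedged sketch: gather post data from a student's WordPress page and
# track writing volume over time. Assumes the site exposes the standard
# WordPress REST API; the analysis metric (word count) is deliberately simple.
import json
import re
from urllib.request import urlopen

def fetch_posts(site):
    """Fetch recent posts as JSON from a WordPress site (network call)."""
    with urlopen(f"{site}/wp-json/wp/v2/posts?per_page=100") as resp:
        return json.load(resp)

def words_per_post(posts):
    """Map each post's date to the word count of its rendered content."""
    out = {}
    for post in posts:
        html = post["content"]["rendered"]
        text = re.sub(r"<[^>]+>", " ", html)  # strip HTML tags
        out[post["date"][:10]] = len(re.findall(r"\w+", text))
    return out
```

The same `words_per_post` analysis could be swapped for the vocabulary-weighting idea above once the posts are fetched; the point is only that the data pipeline from personal webpages to longitudinal metrics is straightforward.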

WSU’s recently developed Cougar LEAN (CLEAN) workbench is an example of an empathy-scaffolding workspace that will build to the upper levels of fabrication. Open questions remain, though, about whether constructing your own workspace offers advantages compared to being granted an existing workspace or no space at all.

Where we are likely to see the real increase in personal agency, where things take off in non-linear fashion, is at the community-systemic levels. Students need to be a part of a high-performance system bigger than any of them alone. This system needs to sustain multiple development levels through multiple years of development. If we achieve this, the results of the study will be immediately obvious. Like I described in my TEDx talk, and have seen in nearly every high-performance environment, we’ll know it’s working: the products will speak for themselves.


Montessori, Empathy, and Making Engineers — It’s about the MEMEs

The Montessori method is legendary for childhood development. Many engineers are familiar with the system and often enroll their children within the preschool system due to a reputation for developing science and mathematical skills. What many do not know is that Maria Montessori classified and wrote about the development of 18-24 year olds in the University. I recently completed a search and could not find ANY articles linking the Montessori Method to engineering education. So where did the disconnect emerge and how can we re-envision an engineering education to infuse the magic of the Montessori Method?

I’ll first review the Montessori Method, connect this to modern Lean Manufacturing philosophies (the Toyota Way), and finish with benchmarking provided by our Empathy/Spiral vMemes knowledge aggregation theory. This will result in evolved pedagogical scaffolding techniques for design classes and research laboratories.

The Montessori Method

“My vision of the future is no longer of people taking exams and proceeding on that certification from the secondary school to the university, but of individuals passing from one stage of independence to a higher, by means of their own activity, through their own effort of will, which constitutes the inner evolution of the individual.” Preface to From Childhood to Adolescence by Maria Montessori, 1948.

The Montessori method, pioneered by Maria Montessori from 1897 through the 1950’s in Italy, is one of the most extensively studied educational systems in human history. The system “had the largest positive effects on achievement of all programs evaluated” in a review of pedagogical methods and especially outperformed other programs in the areas of mathematics and science. If the study data is not enough, how about a personal anecdote: I attended a Montessori preschool, so did Sergey Brin, Larry Page, and Jeff Bezos — just sayin’.

Many are familiar with the system. But here’s a quick review of the basics from the Wiki:

  • Mixed-age classrooms; classrooms for children ages 2½ or 3 to 6 years old are by far the most common
  • Student choice of activity from within a prescribed range of options
  • Uninterrupted blocks of work time, ideally three hours
  • A discovery model, where students learn concepts from working with materials, rather than by direct instruction
  • Specialized educational materials developed by Montessori and her collaborators
  • Freedom of movement within the classroom

Other characteristics include a high degree of order: children work on mats to keep themselves contained, everything in the classroom has a place, and all of the children actively clean up after themselves to maintain the classroom. Maria believed that children would naturally play with whatever was most interesting to them at the time. This immediately empowers the child to decide if something is over their head or not, and allows them to self-optimize to the classroom environment. This of course requires a highly developed system of scaffolded learning exercises. Here’s an image of the famous “Golden Beads” exercise for developing counting and spatial understanding:

Creative Commons

“The successive levels of education must correspond to the successive personalities of the child.” Opening sentence of From Childhood to Adolescence by Maria Montessori.

Montessori had a continuous vision of human development along a spectrum from age 0-24. She divided this spectrum into ‘planes’ ages 0-6, 6-12, 12-18, and 18-24. Each of these planes corresponds to a phase of human development. Of these planes, she wrote the least about the 18-24 plane. She envisioned that children were fully developed by this point and ready to begin building families and contributing to their communities. But this is also the phase most relevant to us as this is where most students begin studying engineering within a university. From Childhood to Adolescence contains an appendix titled, “The Function of the University” that is highly critical:

“The desire to work as little as possible, to pass the exams at all costs, and to obtain the diploma that will serve each person’s individual interests has become the essential motive common to the students. Thus academic institutions have become decadent as the progress of culture has transformed man’s existence. True centers of progress have been established in the laboratories of the scientific researchers. They are closed places, foreign to the common culture. The general decadence of the schools noted in our day does not come from a lessening of the instruction given to the students but from a lack of concordance between the organization of the schools and today’s needs. The material bases of civilization have changed to the point where they announce the beginning of a new civilization. In this critical period of human history, the very life of men needs to adapt afresh. And it is here that the problem of education is to be found.”

Read it twice if needed. An incredibly insightful assessment of the problems we still face today! This is also the likely point of disconnect where Maria Montessori lost most University faculty. Remember, that was said in 1948; flash forward 70 years, look around you, and realize these problems are artifacts of the system structure of universities and will not naturally work themselves away. Her point, “pass the exams at all costs,” emphasizes a philosophical shortfall of University systems that rely almost completely on examination-based inspection for quality control. Montessori realized, nearly 50 years before the Lean Manufacturing movement, that the other valid approach to improving quality is error-proofing the production process, so that the need for inspection is dramatically reduced or eliminated entirely. When faculty are confronted with a highly critical opinion like this, a teaching method that is already somewhat alien to the status quo, and her assessment that most of the problems facing the university stem from failure of the 0-18 year education, it’s all too easy to check out and move on to someone with more accessible solutions. However, if you keep reading her vision for a new University, Montessori states:

“Today it is not by philosophy, not by discussion of metaphysical concepts, that the morals of mankind can be raised. It is by activity, by experience, and by action… All the points noted put the finger on the impossibility of enclosing education within the limits of a room where the individual at work is inert, perpetually dependent on the teacher, separated from the rest of mankind. This is true even for small children… all facilities ought to be provided to create some form of work that may permit the students to get a start toward economic independence, so that they may be entirely free to study and able to find their true position according to their just value.”

You see that Montessori was already onto the experiential learning movement that has been popular over the last several decades in engineering education. Moreover, when you consider her empathy toward the autonomy and economic freedom of the students, coupled with her empathy toward the community constituents of the University, you realize that she was onto an entirely different level of education than we’re used to.

To summarize, the core tenets of the Montessori preschool system remain valid for the University. Extending these principles, Maria Montessori envisioned a University with close connection to the needs of the current culture and community constituents: one where resources and facilities were immediately available to empower students with time to think and practice solving the problems that allow them to contribute to their society. She repeatedly emphasized that this is not possible within the traditional classroom, but likely is within the scientific research lab, should labs be better connected to community constituents.

Transferring Montessori’s Methods to Engineering Education

After reading Maria Montessori’s book, I’m looking back on several of my pieces on engineering education over the last few years:

I’m not far from the future University that Montessori envisioned, perhaps due to her system’s influence on my pre-K years. In reality, the educational environment created by Dr. Chuck in his capstone design class is the closest I’ve seen to this, barring changes to order and further systematization of workspaces. Regardless, we have the pieces to the puzzle, we just need to put them together and SHOW everyone how incredible it can be. The question I’ve been working on for years is how?

About a month ago my good friend and mentor P.K. Northcutt was debating with me over lunch about the key differences between an education in the performing arts and engineering. Over the years we have noticed a decisive difference in the caliber of confidence and professionalism between the engineering and performing arts disciplines. I presented a hypothesis that was new to him: the key difference between performing arts and engineering educations is the amount of resources (time, money, energy) it takes to perform. In the performing arts you can improvise a performance on the spot. You can design a dinner dish in the morning and test it by evening; in fact, you have to eat. With engineering, though, the amount of money and time it takes to complete the Design-Build-Test cycle can be 4 months to a year or more and thousands of dollars in investment. How is it then possible, or reasonable, to expect an engineer to perform with the level of personal confidence and mastery of those graduating from the performing arts? It is in completing this Design-Build-Test cycle that engineers gain the activity, experience, and action advocated by Montessori. The more actual engineering the students get that is relevant to the needs of themselves and our community, the better we will all be. Hence we arrive at our goal:

Create an engineering education system where students continuously improve performance with the design-build-test cycle.

We will achieve this through completing the following objectives:

  1. Design a continuous flow for the design-build-test progression: After completing practice tours of my lab with P.K., we realized the need to restructure our very concept and flow of tours. Lean manufacturing and the Toyota Way emphasize the need for continuous-flow, pull-through production processes, regardless of what is being produced. We will now start our tours in our TFRB 108 Design Space, where we will design, in real time, a tour customized to our specific client. We will then move through our evolving Build space, which could be upgraded soon with basic machining equipment. We will then progress into our established hydrogen+cryogenic testing areas in ETRL 221 or on the Quonset hut pads next to TFRB. The flow process is designed to help everyone understand the proper place and progression for all things in the lab system. I’m also developing a smaller version of this system, in the form of quickly reconfigurable assembly lines, to teach in the EEME 154 Systems Design classroom.
  2. Build a standardized workspace system for the labs and classrooms: Over the last semester I’ve worked with ME students Ryan Pitzer, Austin Rapp, and Jake Enslow to develop the Cougar LEAN (CLEAN) workbench system. A presentation on their work is here: Cougar Lean Workbench System. The entire top of this workbench is a jigging surface for safely fixturing work for modifications. The CLEAN bench is directly analogous to Maria Montessori’s mat/tray system to keep a child’s workspace contained. Here’s an image of our first prototype bench; we’ll have about 4 in my lab space and 8 in the classroom:
  3. Test integrating the system into the lab and ME coursework: This spring we will have lab work sessions where teams quickly design, build, and test components for this system. Many of the early days will be spent simply building out our CLEAN bench systems so we have the space and materials to build new products. In the 415 Systems Design class we are building 8 CLEAN bench systems and establishing the system there as well. We’ll learn considerable information from these tests. We have a new SDEX website to accompany the HYPER lab site to disseminate this information.
  4. Utilize our Design-Build-Test sequence to develop new learning materials and systems: The Montessori System took decades of community effort to develop and refine all of the scaffolded learning/work exercises. We have specific needs to develop materials that help improve our knowledge of a) cryogenic materials, b) high vacuum systems, c) flammable gas plumbing and safety, and many more. These will allow students in the lab the freedom to utilize the learning modules they need, at the optimal time, based on the needs of the projects they are working on for our community constituents. This rekindles the magic of individual empathy and autonomy within the Montessori Method.
Memes, and What this means for HYPER lab and WSU ME students

At first it will seem foreign, even alien; eventually it will become standard, expected, and a source of pride. The Montessori method looks totally different from traditional pedagogical approaches because IT IS different, and the majority of our students are not prepared for it. This is the core of the disconnect and misunderstanding of Dr. Chuck’s design method by both students and other faculty. All of the values, specifically empathy to students and clients, are highly performance-community-systemic on the Spiral vMeme taxonomy. This is 1-2 levels removed from our traditional authoritarian-legalistic classes and is a challenge for most to understand within the traditional system.

Even with the brilliant students I’ve had in my lab, I’ve been frustrated by how much I have to drive community, contribution to the lab system, and performance/drive to finish projects. It is simply a result of them coming from an authoritarian-legalistic system without the scaffolding in the lab for the performance/community/systemic memes.

Our adaptation of the Montessori Method into the University, along with completing the lab’s transition to a Lean production system, leads to a totally new scaffold of expectations for student development and achievement. No longer is a thesis/product the terminal end, or indicator of mastery of learning. Rather, we will have a layered system of mastery directly connected to the vMeme taxonomy. These levels lead to the following progression through the lab:

  1. Authority: A student is given authority and responsibility over a defined area contributing to an end client/community need.
  2. Legal: A student must thoroughly research and document the vocabulary, rules, and laws required to know where a new contribution could exist within the area of need (analogous to the introduction, literature review, and theory sections to a paper, report, or thesis).
  3. Performance: A student should show a level of understanding and ability to perform with the established techniques in their area, then develop a heuristic/design/hypothesis to develop their new contribution. This is the typical advanced goal and terminal end to a student’s learning.
  4. Community: A student needs to be connected to the broader community that needs the contribution. This comes through many forms, including paper publication, outreach, and direct communication with end-users. A connection/empathy to the end stakeholders and lab cohort is needed to ensure the transition into the workforce and sustain resource flows into the lab. The student needs to demonstrate performance within a lab cluster or team of 4-5 people.
  5. Systemic: Through repeated cycles through these levels, enabled by our Design-Build-Test flow, the student now understands the complexity of personal development. They now take ownership by contributing to furthering/continuously improving the system, developing their own scaffolded learning materials and demos for others to follow.

All of my indicators say that the real product we produce here in the University is not the research, but rather the students and people with the ability to learn who continually contribute to our community over the course of their lives. This is doubling down on the long-term investment. We’ll risk short-term gains, and miss a few grant opportunities with short, year-long timescales, but could win out on the timescale of decades because we will have built a sustaining community and system around the quality of our people. Bezos, Brin, and Page were no accident. The irony is that they, through Amazon and Google, may be the only ones with enough resources remaining to take this long-term approach to research.





ME 406 Lesson 7: Visualizing your Results

We’re now to the point where we’ve taken measurements and analyzed our confidence and uncertainties. One of the most rewarding parts of experimental investigations is graphing and visualizing your hard work. Usually this will appear in the Results and Conclusions section of your report.

5. Results and Conclusions

What would someone take away from your report if they only read the introduction and skimmed ahead to the results and conclusions? Make sure you don’t let them miss your most important points and findings! When you skim through something what do you look for? Odds are headings and images.

Why the images?

They’re worth a thousand words… 900 of which are irrelevant. But what do they really tell us?

Comparing with visuals

When you look at the above graph, why have I chosen two y-axes to present both the heat capacities of hydrogen and the equilibrium fraction of spinning vs. non-spinning hydrogen at cryogenic temperatures? Because they are related. Graphs are great at showing how complex information is related to, and changes relative to, something else. Graphs compare and contrast.
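A two-y-axis comparison like the one above takes only a few lines in most plotting tools. Here is a minimal matplotlib sketch of the pattern; the curves are made-up stand-ins (not the actual hydrogen property data), so treat the numbers as placeholders:

```python
# Sketch of a two-axis comparison plot. The data below are hypothetical
# stand-ins for the heat-capacity and equilibrium-fraction curves.
import matplotlib
matplotlib.use("Agg")  # render off-screen so the script runs headless
import matplotlib.pyplot as plt
import numpy as np

T = np.linspace(20.0, 300.0, 200)                      # temperature, K
cp = 10.0 + 15.0 * np.exp(-((T - 170.0) / 60.0) ** 2)  # placeholder heat capacity
frac = 1.0 / (1.0 + np.exp(-(T - 120.0) / 25.0))       # placeholder equilibrium fraction

fig, ax1 = plt.subplots()
ax1.plot(T, cp, color="tab:blue")
ax1.set_xlabel("Temperature (K)")
ax1.set_ylabel("Heat capacity (J/(mol K))", color="tab:blue")

ax2 = ax1.twinx()                   # second y-axis sharing the same x-axis
ax2.plot(T, frac, color="tab:red")
ax2.set_ylabel("Equilibrium fraction (-)", color="tab:red")

fig.savefig("comparison.png", dpi=150)
```

The key call is `twinx()`, which overlays a second y-axis on the same temperature axis so the reader can see how the two quantities change together.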

Many will say that figures are a matter of taste and preference. That’s true to a point. However, you’ll find that good graphics will consistently do similar things. Here’s a few examples:

  1. What not to do: University of Wisconsin-Madison Professor Rod Lake’s abominable graphs website.
  2. What to do: here’s a very old standard on graphic design from 1914 that is still very relevant.

In general, the guidelines are pretty clear:

  1. Be empathetic to your reader.
  2. Focus on the relevant change or correlation you want the graphic to display.
  3. Give the titles, units, and digits when and where they are needed.
  4. Omit useless and repetitive words, digits, and space.

But you’ll need to practice to get good at it. The example above was of theoretical curves. Notice that they are smooth. The uncertainties in the predictions likely fall within the line-width. Experimental measurements are seldom this easy. The following figure shows some raw measurement traces for when I measured the visco-plastic flow of solid deuterium. You can see in the figures when steady state was achieved at a given temperature, dynamic shear strength, and heat transfer measurement. At these steady state points I can average to obtain representative points.

Raw experimental measurements

Notice that I use the figure caption to describe what is displayed and do not repeat the information with a title above the graph. These graphs are all made with EES. The EES default graph is very clean and tends to work in most situations, even for very large datasets like the one above. Plotting raw data like this can give people incredible confidence in your experiment. Can you estimate the precision error from the data traces? Does it look like I achieved steady state before moving to another data point?

With these raw datasets it’s straightforward to average the values to return points you want to report. Here’s an example with all of the points I measured for my Ph.D. dissertation.
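The reduction from a raw trace to a reported point can be sketched in a few lines. The snippet below uses a synthetic trace (a settling transient plus noise) standing in for a real measurement; the steady-state cutoff time is an assumption you would justify from your own plots:

```python
# Minimal sketch: reduce a raw steady-state trace to one reported point.
# The trace is synthetic: a transient settling toward 20 K plus noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 600.0, 1.0)                   # time, s
signal = 20.0 + 2.0 * np.exp(-t / 60.0)          # settling transient
trace = signal + rng.normal(0.0, 0.02, t.size)   # add instrument noise

steady = trace[t >= 300.0]       # keep only the window judged to be steady state
point = steady.mean()            # representative value to report
precision = steady.std(ddof=1)   # precision (random) error estimate, 1-sigma

print(f"{point:.3f} +/- {precision:.3f}")
```

Averaging only over the steady window is the whole trick: include the transient and both the mean and the apparent scatter are biased.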

Comparison plot

This single graph contains a considerable amount of information. This is a graph you want to close with. This one contains the visco-plastic flow measurements for hydrogen, deuterium, and neon. Can you compare and contrast? I even include the prior measurements, some with estimated uncertainty bars. Do the bars add important information when comparing to the correlations I developed (colored lines)?

In the end, spend time on effective visuals. The narrative that explains them is only necessary when your graphs are not completely obvious alone. Once finished, ask yourself what a person will take away from it. Is it what you want them to leave the report with? It should be. Make sure they can’t miss it.

From here, it’s easy to create bulleted lists summarizing the key findings from your report. With all of the key portions of your reports and presentations covered, it’s time to revise, polish, and practice!

ME 406 Lesson 6: Data Analysis and Uncertainty Propagation

Now that we’ve covered the design/layout, procedure, and instrument calibration and traceability for our experiment, it’s time to start analyzing our data.

Section 4.3: Data Analysis and Uncertainties

The goals for this section of your report are

  1. Show us what happens to a raw data point prior to being reported, such that the raw data can be analyzed by someone, somewhere else.
  2. Show us where uncertainties of the reported values come from (e.g. bias error, precision error, etc.)
  3. Quantify how confident you are in the reported measurements.
  4. Conform to ASTM Standard E2586 – Standard Practice for Calculating and Using Basic Statistics. You can download the pdf for free while on campus.

How do we quantify confidence? This is where we realize the value of our engineering education. We’ve all calculated a standard deviation by now and know that 68% of the measurements fall within ±σ of the mean (a.k.a. a coverage factor of 1), 95% lie within ±2σ (a.k.a. a coverage factor of 2), and 99.7% lie within ±3σ (a.k.a. a coverage factor of 3). While it’s straightforward to calculate the coverage factor for every steady state measurement you take, you should also do a repeatability test at several points to double check. A classic example: someone states that their uncertainty has a coverage factor of 2 (95% of all measurements fall within ±2σ of the mean), but this is only a repeatability test of the precision error. You still need to propagate instrument uncertainties to determine the bias error.
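As a concrete sketch of the coverage-factor bookkeeping (with synthetic, normally distributed readings standing in for a real repeatability test):

```python
# Sketch: precision error and coverage factors from repeated measurements.
# The 30 "pressure readings" below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
readings = rng.normal(101.3, 0.5, 30)   # e.g. repeated pressure readings, kPa

mean = readings.mean()
sigma = readings.std(ddof=1)            # sample standard deviation

for k in (1, 2, 3):                     # coverage factors 1, 2, 3
    lo, hi = mean - k * sigma, mean + k * sigma
    inside = np.mean((readings >= lo) & (readings <= hi))
    print(f"k={k}: {mean:.2f} +/- {k * sigma:.2f} kPa "
          f"({inside:.0%} of readings inside)")
```

Remember this only characterizes the precision error of the repeated readings; the instrument (bias) uncertainties still have to be propagated separately.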

Precision versus Bias error


This portion of the lesson contains a considerable amount of math as we go through the Root-Sum-Square (RSS) method and what it means for uncertainty propagation. Remember that we can do this math by hand or with EES, which will estimate the uncertainties numerically. Here’s a link to the hand-written notes: lesson-6-lecture-notes.
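The numerical approach is easy to reproduce yourself. Below is a small sketch in the same spirit as EES’s numerical estimate: perturb each input by its uncertainty, take the resulting output change, and root-sum-square the contributions. The helper name `rss_uncertainty` and the toy power example P = V·I are mine, for illustration only:

```python
# Sketch of numerical RSS uncertainty propagation: perturb each input by
# its uncertainty (central difference) and root-sum-square the output changes.
import math

def rss_uncertainty(f, inputs, uncertainties):
    """Propagate input uncertainties through f via central differences + RSS."""
    total = 0.0
    for i, u in enumerate(uncertainties):
        hi = list(inputs); hi[i] += u
        lo = list(inputs); lo[i] -= u
        dF = (f(*hi) - f(*lo)) / 2.0   # (sensitivity) * (input uncertainty)
        total += dF ** 2
    return math.sqrt(total)

power = lambda V, I: V * I
u_P = rss_uncertainty(power, [12.0, 2.0], [0.1, 0.05])  # V +/- 0.1 V, I +/- 0.05 A
# analytic check: sqrt((I*uV)^2 + (V*uI)^2) = sqrt(0.2^2 + 0.6^2) ~ 0.632 W
print(f"P = {power(12.0, 2.0):.2f} +/- {u_P:.3f} W")
```

For this bilinear example the central difference reproduces the analytic partial-derivative result exactly, which is a handy sanity check on your hand math.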

ME 406 Lesson 5: Instrument Calibration and Traceability

In section 4.1 we created a table of key instruments for our measurements that included columns for instrument, purpose, make, model number, range, and uncertainty. Today we dive into the details of instrument uncertainty and traceability.

Many times, knowing the precision/uncertainty of your measurement is just as important as, if not more important than, the value of the measurement itself. The question is: how can you quantify your confidence in the measurement? Two kinds of error will affect your instruments: precision error and bias error.

Precision versus Bias error

For the most part, we can resolve precision error with statistics. We’ll cover that next time. So how do we minimize bias error to the greatest extent possible?

Calibration: determining and documenting the deviation of a measuring instrument’s indication from the conventional ‘true’ value.

Sure we can calibrate our instruments so that we know they are accurate. But how do we know the calibrations are correct? What value is ‘true’ and how can we trust it?

Traceability: process whereby the indication of a measuring instrument (or a material measure) can be compared, in one or more stages, with a national standard for the measurement in question.

In the United States, national standards are maintained and improved by the National Institute of Standards and Technology (NIST). NIST may easily be the most under-appreciated federal agency, as a NIST standard is used for nearly every custody exchange in our economy. Formerly known as the National Bureau of Standards (NBS), the organization was mandated by Congress after the Civil War to standardize the exchange of sugar as a commodity. As it turns out, it’s not easy to standardize the purity of a white powdery substance for custody exchange. A full history of NIST and the NBS can be found here. I’ve worked with NIST researchers for my entire career, and spent several summers at NIST-Boulder. It’s a wonderful place!

So if NIST keeps the standards, how do our instruments get compared to this standard? Here’s a diagram of traceability levels established by the European Cooperation for Accreditation of Laboratories (EAL-G12):

Traceability pyramid

This traceability pyramid is a handy way of understanding the problem of traceability. The wider the base, the more instruments in the world exist at that level of the pyramid, and the cheaper (or lower quality) the measurement calibration is. For example, THE kilogram block on which all other kilograms used to be based was kept in a safe under very carefully controlled conditions; I call this the very top of the pyramid, or level 0. Level 0 is used to create the primary standards sent around the world for each country’s national standards, level 1, which are implemented by the national standards body. Level 2 is a reference standard created from the primary standard and distributed to accredited calibration laboratories around the country. These in turn create workplace/factory standards, level 3, that calibrate the instruments used by the workplace/factory for actually producing things at level 4. Here’s an example of a length measurement standard applied to the traceability pyramid, again from EAL-G12.

Traceability pyramid for length

As you can see, an ultimate micrometer or dial gauge isn’t necessarily kept in a safe in Switzerland to calibrate all other dial gauges, as in the case of the kilogram. The actual type of instrument can change depending on the standard level (0, 1, 2, 3, 4). This makes sense, as the level of precision goes up significantly with standard level; in many cases you just need different physical paradigms to achieve this. Here’s a handy graphic showing how in late 2018 the SI governing committee moved away from physical standards and to universal physical constants for realizing the SI unit system. A full description of the redefinition is here.

The New Base SI (Commons)

As you can see, the process of traceability can quickly become complex! As the table above shows, most standards bodies will provide a certificate of traceability to describe this chain of calibration on a single convenient document. Here’s an example of such a certificate of traceability for the microphones in the anechoic chamber:

Microphone Traceability Certificate

There is an incredible amount of information on this document. Key points include the actual instruments, including serial numbers used in the unbroken chain going back to the primary standard. The dates of calibrations, including environmental conditions are provided, along with the calibration curve itself. In the fine print you’ll also see the obligatory, “whose accuracies are traceable to the National Institute of Standards and Technology.” As you can guess, this document is valuable, sometimes more valuable than the instrument itself. Hang onto these in a safe place!

But even with this document things are not fine and dandy. Is it current enough? How do you know the instrument was not abused before you started use? These are complex questions that there isn’t currently a standard for.

A few years ago I proposed a Standards Traceability Index (STI) to go along with the education of traceability. The STI has three digits: X.XX

X.__ — The first digit is the standard level (from the pyramid above) for the instrument that you are using (almost everything in our lab is level 4, although the calibration instruments are level 3).

_.X_ — The second digit describes the status of the instrument’s traceability. For example, 0 is used for a current unbroken chain, 1 for a current but broken chain, 2 for an outdated unbroken chain, 3 for an outdated broken chain, and 4 for no certificate chain.

_._X — The third digit describes the status/condition of the instrument. For example, 0 is brand new and performance is validated, 1 is used but performance is validated, 2 is new but non-validated, 3 is used and non-validated, and 4 is for unknown condition.

So for example, a brand new platinum resistance thermometer with a NIST traceable calibration that you just dunked in a liquid nitrogen bath to check would likely have an STI = 4.00.
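If you want to keep STI values in a lab inventory spreadsheet or database, the composition rule is simple enough to encode. The helper below is a hypothetical sketch (the function name and interface are mine, not part of the proposed index):

```python
# Hypothetical helper for composing the Standards Traceability Index (STI)
# described above: standard level, traceability-chain status, condition.
def sti(level, chain, condition):
    """Return the STI string X.XX; each digit must be 0-4."""
    for digit in (level, chain, condition):
        if digit not in range(5):
            raise ValueError("each STI digit must be 0-4")
    return f"{level}.{chain}{condition}"

# A level-4 instrument with a current unbroken chain (0), brand new with
# performance validated (0) -- the thermometer example above:
print(sti(4, 0, 0))   # -> 4.00
```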

Using the STI for this class is likely overkill. But if you need to do a very good job, it’s good to think about. In general you should include a statement on the traceability of all your instruments when assessing your measurement confidence/uncertainty. Better yet, check the calibrations once you’re finished with one of our laboratory calibration standards. If you’re uncertain in your results, then how valuable are they?

Traceability is also important for use with property and flow correlations. It’s sadly very common for people to reference a software package as the source of their property data. In reality, though, the software is just implementing an equation that someone spent a lot of time on. Reference the original equations, not the software. Many people end up embarrassed when they cite some fancy software package, only to find out it’s using the ideal-gas law in a non-ideal situation.

At the minimum, the table of your instruments should include a discussion, specific to each instrument, on the currency and traceability of the calibration, and a justification for the degree it matters.

ME 406 Lesson 4: Experimental Setup and Procedure

Now that we have our motivation for an experiment established (Chapter 1), showed that there is a gap needing to be filled in the literature and standards for doing so (Chapter 2), and have a working model connecting what our client cares about to what we are measuring (Chapter 3), it’s time to start experimenting. Give us an introductory paragraph describing how this experimental chapter is organized. Begin with the following section:

4.1 Experimental Setup

The goals of the experimental section are twofold:

  1. SHOW that you understand the key components of the experiment and how they work.
  2. SHOW enough information so that the experiment can be repeated by someone else, somewhere else without having to contact you.

Think about the significance of that for a minute. Have you ever heard about an experimental study where the results could not be repeated/validated? What happens to the credibility of the engineers/researchers who published the report? Have you ever heard about an accident or a near miss where a carefully documented experiment and report saved the engineer’s job? (e.g. They had it right in the report, but a technician cut a corner.) This part of the report, just like careful citation of appropriate standards, can be very important to saving you in a lawsuit or litigation. In Chapter 2 you simply state that the standards cover certain areas. This is the part of Chapter 4 where you SHOW that you are following/conforming to the standards.

Here’s a few good ways to show us you understand the components of an experiment:

  1. A table of instruments involved in recording the data. Give us the purpose, make, model #, applicable ranges, and uncertainties.
  2. Diagrams and accompanying pictures of the actual experiment. Potential diagrams include flow, wiring, force, energy, and others. These are commonly called piping and instrumentation diagrams (P&IDs) and are often referred to in incident investigations.

Making good diagrams is a design challenge in itself. One common mistake is to take a picture of your experiment and superimpose numbers/labels over the picture to identify components. This is usually not optimal. There is an old saying, “A picture is worth a thousand words.” I counter that with “Nine hundred of which are irrelevant.” The Google Maps approach of having a layer with just the streets, then another layer to add in the satellite view, is likely the best. For your reports, I recommend a diagram/schematic view accompanied by an actual picture in a similar orientation as your schematic so your reader can quickly go from one to the other.

Our library has a copy of the ANSI/ISA 2009 Instrumentation Symbols and Identification standard. A briefer version is available on the P&ID Wikipedia page. Whatever organization you end up working for is likely to have their own in-house standard protocol for P&ID that you’ll have to follow. There are several software programs, including Inventor, that will automatically make a P&ID for you. In general a P&ID will have the following features: A) Key piping connections and instrument locations/details, and B) Critical safety, control, and shutdown schemes. You should also have a narrative that walks us through the visuals to make sure that we don’t miss the key features/points/takeaways. How you accomplish this is up to you. Some students have done incredible work using Microsoft Visio. Here are a few of the very best examples I’ve seen from over the years:

Roots Blower Schematic

Centrifugal Fan Schematic

So as you can see, this could take some time to do very well. But once you’ve got a visual like this, it serves as a key feature in your reports, actual experimental info, and presentation. I’ve witnessed visuals like this drop jaws — that’s a lot of political capital in your favor if you can do it to your boss.

4.2 Experimental Procedure

This is a start-to-finish, step-by-step, enumerated description of how the experiment is started, a measurement taken, the procedure for moving from state-point to state-point, and shutdown. Often these test procedures will be given to untrained technicians for data collection, so it’s important that you write it such that nobody can get hurt. You’ll want to use your labeled/numbered experimental diagram from section 4.1 as a reference throughout. Brief, informative, friendly, and firm… Relevant, credible, efficient…

A key oversight in this section is to tersely describe how you will proceed from measurement point to measurement point without stating how long you will wait to ensure you’ve achieved steady state, or how much to vary the experiment between measurement points. Now is the time to plan how long each measurement will take, consider how much time you have to complete the report/project, and then budget your measurements accordingly to maximize value. In other words, you should’ve completed a sample measurement run and have plotted the initial measurements (similar to a product prototype). This will give you considerable confidence that you are budgeting an appropriate amount of time for the number of data points you plan to produce.

NASA and other serious research organizations often want a test matrix and accompanying Gantt Chart describing the timeline for project completion. In such a test matrix you would need to provide a rationale for why you are completing every measurement run to ensure that you were not taking too many measurements, or not enough, and are budgeting an appropriate amount of time.

Next time we’ll cover how we actually analyze and interpret the raw data with quantified confidence/uncertainty. But for now, you’re ready to write your proposed test plan. Here’s a prompt and grading rubric:

Technical Memo Proposal Prompt and Rubric


ME 406 Lesson 3: Using Theory to Guide Experiments

In just about every job I’ve held, people were tempted to label me as either a “theorist” or an “experimentalist”. Don’t take the bait. It’s easy to fit into a stereotype, tough to break free of one. The very best engineers are competent with both the theory and the experiment. It’s what we call a positive synergy: knowledge of one aids the other.

This brings us to three general guiding principles for the Theory chapter of our reports:

  1. Relevance: Connect the primary motivations/needs/objectives for the experiment (performance, efficiency, outputs, etc.) to the key variables for the experiment (resistances, potentials, inputs, etc.).
  2. Efficiency: Establish only enough of the theory such that the calculations can be repeated by a fellow engineer without the need to contact you.
  3. Credibility: At the end of your project, your theoretical predictions should agree with your experimental measurements, and you should have justifications for limits to and deviations from theory. Engineers have value because we can quantify confidence: 95% of the results fall within ±X of the predicted value.

Related to the first principle, when constructing experiments from scratch it is important to determine your key variables a priori (in advance of building the experiment). This allows you to focus your time and money on the key variables that matter to the client/customer goal. We don’t have the time or money for open-ended fishing expeditions.

Example: My Ph.D. dissertation was on the modeling of visco-plastic flow of solid hydrogenic fuel within twin-screw extruders for the fueling of fusion energy tokamaks. Nobody had built a machine to solidify and extrude solid hydrogen before. Only the most basic material properties of hydrogen were available! How do we most efficiently build an experiment that helps us develop theory to model how such a machine will operate? Here’s a visual of my first research poster on this subject:

By building a simple numerical heat exchanger model of the extruder and varying the key input parameters/variables, we discovered that the two-phase heat transfer, latent heat, and viscous dissipation were the most sensitive operating parameters. The latent heat was known and the two-phase heat transfer coefficient was not necessarily controllable, so the most important parameter for us to know very well in the model was the viscous dissipation, which we could control through heater input. So we built an experiment to measure viscous dissipation very well, and along the way measured the two-phase heat transfer coefficient, among other things.

We had a motivated client with a need, searched the literature to know that the need was a gap in the literature/knowledge, and completed a careful calculation to show that an experiment will help solve the need. Now, before we spent considerable funds on equipment and instruments, we had a very good reason to do the experiment and knew what we needed to measure. So, how do we go about actually doing our theory/analysis?

Here’s a few modeling steps:

  1. If you are still using a calculator, stop. Calculators are typewriters. Use some form of engineering equation processor (EES, Matlab, Excel, StarSolve, etc).
  2. Take whatever units you have and immediately convert everything to base SI (m,s,kg,Pa,etc.). Base SI is the only self-consistent unit system. Read my post – End US Engineering Education of English units for more. It’s a huge loss to our national economy that gets worse every year. You will solve your problems faster, with fewer mistakes in base SI. EES will easily convert and check your units are consistent for you. Convert your final plots into whatever units your clients can best handle.
  3. Conduct an uncertainty/sensitivity analysis of your programmed equations. This is possible by simply varying each of the inputs by 10% and watching how much each variation affects the desired output.
  4. Determine key performance metrics for your experiment. Remember how most performance metrics are determined:

System Performance = (What you want)/(What you paid to get it)

Device/Component Performance = (What you got)/(What you could’ve)   a.k.a. (actual)/(ideal)
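The one-at-a-time sensitivity analysis from step 3 can be sketched in a few lines. The model below is a toy, orifice-flow-style relation invented for illustration (the variable names and baseline values are assumptions, not from any real experiment):

```python
# Sketch of a one-at-a-time sensitivity analysis: vary each input by 10%
# and rank how strongly the output responds. The model is illustrative only.
import math

def model(dP, rho, A):
    """Toy output: mass flow ~ A * sqrt(2 * rho * dP)."""
    return A * math.sqrt(2.0 * rho * dP)

baseline = {"dP": 5.0e4, "rho": 1.2, "A": 1.0e-4}
y0 = model(**baseline)

for name, value in baseline.items():
    perturbed = dict(baseline)
    perturbed[name] = 1.10 * value    # +10% variation, one input at a time
    change = (model(**perturbed) - y0) / y0
    print(f"{name}: +10% input -> {change:+.1%} output")
```

The square-root dependence shows up immediately: a 10% change in dP or rho moves the output only about 5%, while the linear area term moves it the full 10%. That ranking tells you where to spend your instrumentation budget.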

With these programmed, you can now make performance/design curves on plots that can help you to determine where to take measurements. Once validated, these curves save substantial money and time.

Being able to quantitatively show where the losses are that caused the actual to be worse than the ideal tells you how best to improve the system and gives you value as an engineer. Once completed, the report will naturally transition to the need to actually do the experiment to test the theory.

In the end, at the minimum, you need to show with math how the inputs are connected to the outputs via variables, and know which variables are most important over what ranges.

“No matter how bright you are or clever your theory is, if it doesn’t agree with experiment, it’s wrong.” ~Richard Feynman

“Through measuring is knowing.” ~Heike Kamerlingh Onnes


Washington State University