Learning Spring Boot 2nd Edition 80% complete w/ Reactive Web

This weekend I sent in the first draft for Chapter 2 – Reactive Web with Spring Boot. Even though this is Chapter 2, turning it in puts the book at 80% complete. That’s because I’m writing Chapters 2, 3, and 4 last, due to the amount they depend on Reactive Spring.

This may sound rather awkward given Spring Boot has yet to release any tags for 2.0. But take note: there is a lot of action in Spring Framework 5.0.0, which has already had several milestones. A big piece of this book is getting hold of those reactive bits and leveraging them to build scalable apps. The other part is how Spring Boot will autoconfigure such stuff.

Thanks to Spring guru Brian Clozel, there is an experimental project that autoconfigures Spring Boot for Reactive Spring, and will eventually get folded into the Spring Framework. Bottom line: Reactive Spring is available for coding today, albeit not with every feature needed. But since the target release date is May, there will be time for spit and polish against the book’s code base.

And now, an excerpt from Chapter 2, for your reading pleasure:


Learning the tenets of reactive programming

To launch things, we are going to take advantage of one of Spring Boot’s hottest new features: Spring 5’s reactive support. The entire Spring portfolio is embracing the paradigm of reactive applications, and we’ll focus on what this means and how we can cash in without breaking the bank.

Before we can do that, the question arises: what is a reactive application?

In simplest terms, reactive applications embrace the concept of non-blocking, asynchronous operations. Asynchronous means that the answer is coming later, whether by polling or by an event pushed back to us. Non-blocking means not waiting for a response, implying we may have to poll for the results. Either way, while the result is being formed, we aren’t holding up the thread, allowing it to service other calls.

The side effect of these two characteristics is that applications are able to accomplish more with existing resources.

There are several flavors of reactive applications going back to the 1970s, but the one currently gaining resonance is reactive streams, due to its introduction of backpressure.

Backpressure is another way of saying volume control. The consumer controls how much data is sent by using a pull-based mechanism instead of a traditional push-based solution. For example, imagine requesting a collection of images from the system. You could receive one or a hundred thousand. To prevent the risk of running out of memory in the latter case, people often code page-based solutions. This ripples across the code base, causing a change in the API. And it introduces another layer of handling.

For example, instead of having a solution return a risky collection like this:

public interface MyRepository {
    List<Image> findAll();
}

We would instead switch to something like this:

public interface MyRepository {
    Page<Image> findAll(Pageable p);
}

The first solution is simple. We know how to iterate over it. The second solution is also iterable (Spring Data Commons’s Page type implements Java’s Iterable interface), but requires passing in a parameter to our API specifying how big a page is and which page we want. While not hard, it introduces a fundamental change in our API.
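To make that extra handling concrete, here is a minimal sketch of what a caller ends up doing, assuming the Image element type from the interfaces above and Spring Data’s PageRequest (the caller and its names are purely illustrative):

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;

// Hypothetical caller of the paged interface above; the paging concern now
// leaks into every consumer of the API.
public class ImageClient {

    private final MyRepository repository;

    public ImageClient(MyRepository repository) {
        this.repository = repository;
    }

    public void printFirstPage() {
        // The caller must decide the page number and page size up front...
        Page<Image> firstPage = repository.findAll(PageRequest.of(0, 20));

        // ...and ask again for page 1, page 2, and so on if it wants more.
        firstPage.forEach(image -> System.out.println(image));
    }
}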

Reactive streams is much simpler – return a container that lets the client choose how many items to take. Whether there is one or thousands, the client can use the exact same mechanism and take however many it’s ready for.

public interface MyRepository {
    Flux<Image> findAll();
}

A Flux, which we’ll explore in greater detail in the next section, is very similar to a Java 8 Stream. We can take just as many items as we want, and it lazily waits until we subscribe before yielding anything. There is no need to put together a PageRequest, making it seamless to chain together controllers, services, and even remote calls.
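Here’s a minimal sketch of that idea using Reactor’s Flux directly, with hard-coded data purely for illustration:

import reactor.core.publisher.Flux;

public class FluxVolumeControl {

    public static void main(String[] args) {
        // Stand-in for repository.findAll(): nothing is emitted until we subscribe.
        Flux<String> images = Flux.just("a.jpg", "b.jpg", "c.jpg", "d.jpg", "e.jpg");

        // The consumer controls the volume: take two items and cancel the rest.
        // (A subscriber can also request items in batches via its Subscription,
        // which is the backpressure mechanism described above.)
        images.take(2)
              .subscribe(System.out::println);
    }
}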


Hopefully this has whet your appetite to code against Reactive Spring.

Happy coding!

 

Do you have real life struggles in SW development? Lessons learned in Ops? Come share them @NashvilleJUG

Ever battle a NoSQL data store for six hours straight? Installed an upgrade that destroyed a database? Have you spent two weeks learning a new library, language, or build tool, only to chuck it out the window? We’d love to hear about it at the Nashville Java Users Group.

We meet on the first Tuesday of every month in downtown Nashville. Beer and pizza are provided gratis.

We’re looking for people just like you, willing to share their most tragic or most exciting tale. Can you chat for 20 minutes? That’s all we ask.

Whether it’s a tale about beating your brains out over a Maven plugin or kicking a Redis server into oblivion, we’re excited to hear about it! Whether you scrapped some infernal JavaScript library, or just finished a new book that has changed your development perspective forever, let us know.

And you’ll find a bunch of others nodding along, saying “I know what you mean!” The Java community in Nashville is strong. We founded the group back in 2010, and it has been growing ever since. But we can’t operate without guests coming in and pouring out their hearts and experiences. We need you!

How to reach us:

The beauty of coding frontends with React

This industry can be quite brutal. Tools come and go. Programming styles invented fifty years ago suddenly become relevant. But I really enjoy when a certain toolkit nicely presents itself over and over as the way to go. I’m talking about React. Ever wonder what it is that has made coding frontends with React so dang popular? Let’s take a peek.

What’s so good about React?

React innovates frontend development by moving the focus off of cobbling together DOM elements. Instead, it shifts things toward laying out a declarative UI and driving everything by a consolidated state model. Update the state and the layout changes automatically.

In traditional JavaScript toolkits, you find yourself writing DOM-finagling code bits inside event handlers strewn throughout the code base. (jQuery, I’m looking at you!) Managing, organizing, and maintaining order in this code is a chore at which it isn’t hard to fail. It’s easy to NOT clean up properly and let your app leak.

Get on with the example already!

With React, you lay out a series of HTML elements inside the code (and using ES6 makes your eyes stop bleeding!) based on properties and state.

FYI: Properties are read-only attributes; state attributes are updateable. In this component, there are NO event handlers. Everything shown is passed through the constructor call and accessed via this.props.

Some people balk at how React mixes HTML with JavaScript in the same file. Frankly, I find keeping things small and cohesive like this to be the right level of mixture.

It’s possible to have optional components, and they can be based on the centralized state model. Flip a toggle or trigger off some other thing (RESTful payload?) and see components appear/disappear. (NOTE: React smoothly updates the DOM for you.)

Check out the fragment below:

Toward the bottom, orgsAndSpacesLoading is used as a stateful flag to indicate some data is loading. Using JavaScript’s ternary boolean check, it’s easy to display a Spinner. When the code fetching the data completes, it merely needs to update this flag to false, and React will redraw the UI to show the <span> with two dropdowns.

Piecing together event handlers and DOM elements by hand puts you in the mindset of updating the screen you’re looking at. You start to think about hunting down elements with selectors, changing attributes, and monkeying around with low level constructs.

When working with React, you update the state and imagine React redrawing everything for you. The UI is redrawn constantly to catch up to the new state. Everything is about the state, meaning it’s best to invest effort designing the right state model. This pulls your focus up a distinct level, letting you think more about the big picture.

The state must flow

Another neat habit you develop is pushing bits of state down into lower level components as read-only properties. You also push down functions as invocable properties. You may start with functions in the lower level components, but many of them work their way back to manipulating the state. And the state often works best when pulled toward the top. Hence, functions tend to move up, making lower level components more easily driven by properties.

This component is a reusable HTML checkbox with a label. You feed it the name of a state attribute and it allows flipping the state attribute on or off. Changes are invoked by the passed in property function, handleChange. This function is actually passed into a lot of various components in this application. You can see how this component is invoked below:

  • The label text is provided – “OAuth?”
  • The name is connected to a property known as settings.oauthEnabled.
  • The function to respond to clicks is this.handleChange.
  • The raw state is passed down as a bit of a free for all.

The point is, nice little components are easy to put together. Bits of state and needed functions are easy to hand off. And we don’t fritter away time building the DOM or thinking about how to trigger an update in one part of the UI from some remote corner of it.

We simply update the relevant bits of state and let the system redraw itself as needed. Once you get warmed up to this style of building frontends, it’s hard to put it down.

Happy coding!

Check out my @SpringData and @SpinnakerIO talks from SpringOne Platform @S1P

Recently, my latest conference presentations have been released. You are free to check them out:

In the Introduction to Spring Data talk, I live code a project from scratch, using start.spring.io, Spring Data, and other handy Spring tools.

In the Spinnaker: Land of a 1000 Builds talk, I present the CI/CD (continuous integration/continuous delivery) multi-cloud tool Spinnaker:

Enjoy!

Tuning Reactor Flows

I previously wrote a post about Reactively talking to Cloud Foundry with Groovy. In this post, I want to discuss something of keen interest: tuning reactor flows.

When you use Project Reactor to build an application, does the style feel a bit new? Are you just trying to keep your head above water? Perhaps you haven’t even thought about performance. Well, at some point you will. Because something big will happen. Like a 20,000 req/hour rate limit getting dropped on your head.

Yup. My development system mysteriously stopped working two weeks ago. I spotted some message about “rate limit exceeded” and rang up one of my friends in the Ops department to discover my app was making 43,000 req/hour. Yikes!

As I pored over the code (big thanks to the Ops team for giving me a spreadsheet showing the biggest-to-smallest calls), I started to spot patterns that seemed like things I had seen before.

Reactor tuning is a lot like SQL tuning

Long long ago, I learned SQL. As the saying goes, SQL isn’t rocket science. But understanding what is REALLY happening is the difference between a query taking twenty minutes vs. sub-second time to run.

So let’s back up and refresh things. In SQL, when you join two tables, it produces a cartesian product. Essentially, a table with n rows joined to a table with m rows will produce a table with n x m rows, combining every possible pair. From there, you slim it down based either on relationships or on filtering the data. What DBMS engines have had decades to learn is how to read your query and figure out the BEST order to do all these operations. For example, many queries will apply filtering BEFORE building the cartesian product.

In Reactor, when you generate a flux of data and then flatmap it to another flux, you’re doing the same thing. My reactor flow, meant to cache up a list of apps for Spinnaker, would scan a list of eighty existing apps and then perform a domain lookup…eighty times! Funny thing is, it was looking up the same domain EIGHTY TIMES! (SQL engines have caching…Reactor doesn’t…yet).

So I rang up my most experienced Reactor geek, and he told me that it’s more performant to simply fetch all the domains in one call first, and THEN do the flatmap against this in-memory data structure.
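The shape of that fix looks roughly like the following sketch, which uses plain Reactor with made-up names rather than the real cf-java-client calls:

import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class DomainLookupSketch {

    // Stand-in for one bulk REST call that returns every domain in the space.
    Mono<List<String>> fetchAllDomains() {
        return Flux.just("example.com", "apps.internal").collectList();
    }

    Flux<String> decorateApps(Flux<String> appNames) {
        // Anti-pattern: appNames.flatMap(app -> lookUpDomainFor(app)) would hit
        // the remote API once per app. Instead, fetch the domains once...
        return fetchAllDomains()
            // ...then map each app against the in-memory list.
            .flatMapMany(domains -> appNames.map(app -> app + " -> " + domains));
    }
}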

Indexing vs. full table scans

When I learned how to do EXPLAIN PLANs in SQL, I was ecstatic. That tool showed me exactly what was happening in what order. And I would be SHOCKED at how many of my queries performed full table scans. FYI: they’re expensive. Sometimes it’s the right thing to do, but often it isn’t. Usually, searching every book in the library is NOT as effective as looking in the card catalog.

So I yanked the code that did a flatmap way at the end of my flow. Instead, I looked up ALL domains in a CF space up front and passed along this little nugget of data hop-to-hop. Then when it came time to deploy this knowledge, I just flatmapped against this in-memory collection of data. Gone were all those individual calls to find each domain.

.then(apps ->
	apps.stream()
		.findFirst()
		.map(function((org, app, environments) -> Mono.when(
			Mono.just(apps),
			CloudFoundryJavaClientUtils.getAllDomains(client, org))))
		.orElse(Mono.when(Mono.just(apps), Mono.empty())))

This code block, done right after fetching application details, pauses to getAllDomains(). Since it should only be done once, we only need one instance from our passed-along data structure. The collection is gathered, wrapped up in a nice Mono, and passed along with the original apps. If there are no domains, an empty Mono is passed along instead.

(NOTE: Pay it no mind that after all this tweaking, the Ops guy pointed out that routes were ALREADY included in the original application details call, eliminating the need for this. The lesson of fetching a whole collection up front is still useful.)

To filter or not to filter, that is the question

Filtering is an art form. Simply put, a filter is a function to reduce rows. Being a part of both Java 8’s Stream API as well as Reactor’s Flux API, it’s pretty well known.

The thing to watch out for is whether the filter operation is expensive and whether it sits inside a tight loop.

Loop? Reactor flows don’t use loops, right? Actually, that’s what flatmaps really are. When you flatmap something, you are embedding a loop to go over every incoming entry and possibly generate a totally different collection. If this internal operation inside the flatmap involves a filter that makes an expensive call, you might be repeating that call too many times.

I used to gather application details and THEN apply a filter to find out whether or not this was a Spinnaker application vs. someone else’s non-Spinnaker app in the same space. Turns out, finding all those details was expensive. So I moved the filter inward so that it would be applied BEFORE looking up the expensive details.

Look at the following code from getApplications(client, space, apps):

return requestApplications(cloudFoundryClient, apps, spaceId)
	.filter(applicationResource ->
		applicationResource.getEntity().getEnvironmentJsons() != null &&
		applicationResource.getEntity().getEnvironmentJsons().containsKey(CloudFoundryConstants.getLOAD_BALANCERS())
	)
	.map(resource -> Tuples.of(cloudFoundryClient, resource))
	.switchIfEmpty(t -> ExceptionUtils.illegalArgument("Applications %s do not exist", apps));

The code above is right AFTER fetching application information, but BEFORE going to related tables to find things such as usage, statistics, etc. That way, we only go for the ones we need.

Sometimes it’s better to fetch all the data, fetch all the potential filter criteria, and merge the two together. It requires a little more handling to gather this together, but again this is what we must do to tailor such flows.

Individual vs. collective fetching

Something I discovered was that several of the Cloud Foundry APIs have an “IN” clause. This means you can feed it a collection of values to look up. Up until that point, I was flatmapping my way into these queries, meaning that for each application name in my flux, it was making a separate REST call for each one.

Peeking at the lower level APIs, I spotted where I could give it a list of application ids vs. a single one. To do that, I had to rewrite my flow. Again. By putting together a collection of ids and NOT flatmapping against them (which would unpack them), but instead using collectList, I was able to fetch the next hop of data in one REST call (not eight), shown below:

return PaginationUtils
	.requestClientV2Resources(page -> client.spaces()
		.listApplications(ListSpaceApplicationsRequest.builder()
			.names(applications)
			.spaceId(spaceId)
			.page(page)
			.build()))
	.map(OperationUtils.<ApplicationResource, AbstractApplicationResource>cast());

cf-java-client has a handy utility to wrap paged result sets, iterating and gathering the results…reactively. Wrapped inside is the gold: client.spaces().listApplications(). There is a higher level API, the operations API, but its focus is replicating the CF CLI experience. The CF CLI isn’t built to do bulk operations, but instead operates on one application at a time.

While nice, it doesn’t scale. At some point, it can be a jump to move to the lower level APIs, but the payoff is HUGE. Anyhoo, by altering this invocation to pass in a list of application names, and following all the mods up the stack, I was able to collapse eighty calls into one. (Well, two, since the page size is fifty.)
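In rough terms, the trick is to gather the names with collectList and hand the whole list to one bulk request, instead of flatmapping one request per name. A hypothetical sketch (plain Reactor, not the real cf-java-client API):

import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class BulkFetchSketch {

    // Stand-in for a bulk lookup: one REST call that accepts many names at once.
    Flux<String> lookupDetails(List<String> names) {
        return Flux.fromIterable(names).map(name -> "details for " + name);
    }

    Flux<String> fetchDetails(Flux<String> appNames) {
        // flatMapping each name into its own lookup would unpack the flux and
        // issue one call per name. Gathering first keeps it to a single call.
        return appNames
            .collectList()                     // Mono<List<String>>: the whole batch
            .flatMapMany(this::lookupDetails); // one bulk request
    }
}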

You reap what you sow

By spending about two weeks working on this, I was able to replace a polling cycle that performed over seven hundred REST calls with one that makes fewer than fifty. That’s basically a 95% reduction in network traffic, and it nicely put my app in the safe zone for the newly imposed rate limit.

I remember the Ops guy peeking at the new state of things and commenting, “I’m having a hard time spotting a polling cycle” to which the lead for Cloud Foundry Java Client replied, “sounds like a good thing.”

Yes it was. A VERY good thing.

Reactively talking to Cloud Foundry with Groovy

I’ve been working on this Spinnaker thing for over a year. I’ve coded support so Spinnaker can make continuous deployments to Cloud Foundry. And the whole thing is written in Groovy. I recently upgraded things so that I can now talk reactively to Cloud Foundry with Groovy.

And it’s been a nightmare.

Why?

Groovy is pretty darn wicked. Coding Spring Boot apps mixed with Spring MVC controllers in the terse language of Groovy is nothing short of gnarly. But it turns out there are a couple of things where Groovy actually gets in your way.

Reactor + Cloud Foundry

Want a taste? The code fragment below shows part of a flow used to look up Spinnaker-deployed apps in Cloud Foundry:

operations.applications()
  .list()
  .flatMap({ ApplicationSummary appSummary ->
    operations.applications()
      .getEnvironments(GetApplicationEnvironmentsRequest.builder()
        .name(appSummary.name)
        .build())
      .and(Mono.just(appSummary))
  })
  .log('mapAppToEnv')
  .filter(predicate({ ApplicationEnvironments environments, ApplicationSummary application ->
    environments?.userProvided?.containsKey(CloudFoundryConstants.LOAD_BALANCERS) ?: false
  } as Predicate2))
  .log('filterForLoadBalancers')
  .flatMap(function({ ApplicationEnvironments environments, ApplicationSummary application ->
    operations.applications()
      .get(GetApplicationRequest.builder()
        .name(application.name)
        .build())
      .and(Mono.just(environments))
  } as Function2))

This is the new and vastly improved Cloud Foundry Java SDK built on top of Project Reactor’s async, non-blocking constructs (Mono and Flux with their operations). Every function call is an async, non-blocking operation fed to the next function call when the results arrive.

What does this code do? It looks up a list of Cloud Foundry apps. Iterating over the list, it weeds out anything that doesn’t have a LOAD_BALANCER environment variable, a tell for Spinnaker-deployed apps. Finally it looks up the detailed record for each application.

The heart of the issue

What’s nestled inside several of these “hops” in this flow is a tuple structure. In functional flows like this, where each hop gets a single return value, we often need to pass along more than one piece of data to the next hop. It’s the side effect of not using the imperative style of building up a set of variables, but instead passing along the bits in each subsequent function call.

cf-java-client has TupleUtils, a collection of functions meant to pack and unpack data, hop to hop. It’s elegant and nicely overloaded to support up to eight items passed between hops.

And that’s where Groovy falls flat. Groovy has this nice feature where it can coerce objects. However, with all the overloading, Groovy gets lost and can’t tell which TupleUtils function to target.

So we must help it by coercing it into the right structure. See those “as Function2”  and “as Predicate2” calls? That helps Groovy figure out the type of lambda expression to slide things into.

And it’s dragging me down!

The solution

So I finally threw in the towel and converted this one class into pure Java.

Yes, I ditched hip and cool Groovy in favor of the old warhorse Java.

You see, when something is so dependent on every character being in the right place, we need all the static support from the IDE we can get. Never fear; I’m not dropping Groovy everywhere. Just this one class.

And here is where Groovy’s interoperability with Java shines. Change the suffix of one file. Make the changes I need. And both the IDE and the compiler are happy, giving me an operational chunk of code.
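To give a flavor of why the Java version behaves, here is a minimal sketch using stock Reactor types (not the cf-java-client TupleUtils, so the names are illustrative). The compiler pins down the tuple’s element types, so no “as Function2”-style coercion is needed:

import reactor.core.publisher.Mono;
import reactor.util.function.Tuple2;

public class TupleSketch {

    Mono<String> describeApp() {
        Mono<String> name = Mono.just("my-app");
        Mono<Integer> instances = Mono.just(2);

        // Mono.zip packs both results into a Tuple2; the element types are
        // resolved statically, so the lambda needs no extra type hints.
        return Mono.zip(name, instances)
            .map((Tuple2<String, Integer> tuple) ->
                tuple.getT1() + " runs " + tuple.getT2() + " instance(s)");
    }
}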

I had to rewrite a handful of collections, but it wasn’t the worst thing in the world. In half a day, I had successfully moved the code. And now, as I’m working on another flow, the pain of Groovy’s need for coercion specification is no longer wreaking havoc.

Cheers!

 

Dos and Don’ts of Marketing

When it comes to selling books, there are gobs of opinions out there. And there is no one way. But there are many dos and don’ts when it comes to marketing. In this post, we’ll try to capture a handful of them.

Do – take advantage of every opportunity to market

Never ever EVER pass up a captive audience. When someone reads your book to the end, they will ALWAYS read the page AFTER the end of the story. (Don’t you do the same?)

Key things to include:

  • First chapter of the book’s sequel.
  • First chapter of another work if the current title isn’t a series.

After the chapter, include a link to sign up for your email list. This is called “going for the ask”. It’s tough for introverts, but it’s a time-tested recipe in marketing and sales.

Don’t – publish your series all at once.

“I wrote a series. Can I put it all out there at once?”

No. Don’t do this. At all.

Did you pour your heart and soul into these works? Do you want your readers to get them all? Does it give you a warm fuzzy knowing they have your complete works?

Sorry, but emotions are running rampant. I understand the excitement of wanting your audience to gobble up everything. Take a deep breath and don’t rush it. Rome wasn’t built in a day, and neither will your following be.

When publishing blog articles, it’s good to drive traffic to one place on your site. Don’t tweet asking people to visit two different parts of your site at the same time. Instead, lead them to a single page on your site talking about the first book. If they like it, the tail of the book can include the hook for the second followed by a buy link. Rinse/repeat.

Why? Because all of this leads to Amazon rankings. And it’s better to slam one title into the Top 50 for a given genre than to work two titles into the Top 1000. Focusing all marketing on one title is key. Rankings also help Amazon show things like “frequently bought together”.

Do – seek a long term path with many works

There’s an old adage that quality beats quantity. That is quite true. To a certain point. If you can write a great novel, market it superbly, and build a fanbase, you’ll find that it can help sell more books. Many famous authors started that way. A quality novel can jumpstart your writing future.

But at a certain point, your ability to market may/may not do the trick. That is when quantity can overtake and leave quality in the dust. If you look at many historically famous authors, some of the most successful actually wrote LOTS of novels.

If you can publish a dozen novels, the odds stack in your favor over an author who only writes a single novel and expects to make it big with that. The thing is, try to focus on marketing one at a time. If you try to market multiple titles at once while building your fanbase, you may accidentally confuse it.

From time to time, I may mention my older titles, but in general, I pour all my marketing effort into the latest one.

Do – keep making updates to your site

Never forget – your website is supposed to help people discover you, find out that they like you, interact with you, and ultimately buy your wares. Make fluid adjustments to your site as things change.

  • Offering Black Friday discounts? Put a temporary banner ad at the top of your site.
  • Written one or more books? Create a page for each.
  • Written a series? Write a page talking about the series, with each title in order, linking to each title’s page.
  • Written a blog article series? Craft a menu and put it on the sidebar.
  • Give away handouts when you go to sell books at fairs? Put the handout on a page.
  • Want people to Tweet/email/Facebook? Create a /contact or /me page.

 

Don’t – post just to sell

Something a lot of people have a hard time getting to grips with is that blogging, tweeting, and facebooking shouldn’t be just about selling. In fact, it’s recommended to confine actual selling to less than 20%.

We can all smell an oily salesman. Don’t turn yourself into one.

People will read your blog articles if they carry information they are interested in, and if they find value in them. When you are pitching product, the perceived “info” drops quickly.

Do – use content you’ve written in the past in a conversation

Your website should be your main marketing tool, with Twitter and Facebook the place to put out bread crumbs. Don’t hesitate to share a page or a post pertinent to a discussion on Twitter or Facebook.

Don’t forget, this isn’t just about selling product. In fact, I recently blogged a fragment of an older book when the topic of test coverage surfaced.

I followed that Twitter conversation with a blog post, the flaws of test coverage.

Do you have any tips that have helped you market? Share them in the comments.

How I lost weight during the holidays

…and managed to enjoy myself. It’s true. This past year, I lost weight during the holidays, and it didn’t kill me.

On December 1st last year, I checked in on MyFitnessPal.com at 222.4 lbs, and at the end of the month, had dropped to 217 lbs. That’s 5.4 lbs lost. And I still munched on oatmeal scotchies, sweet minglers, fudge and other things.

Getting serious about health

That last statement may not be an accurate portrayal of things, so let’s back up. Over three years ago, I got interested in better health, so I stopped drinking soda. That was tough, but I can proudly say I’ve not had any soda except under extenuating circumstances, like once having a stomach bug when Sprite was the only thing I could keep down, or another time when I needed caffeine to drive and the only thing available was a Diet Coke.

But my weight-loss goals really got going a year and a half ago. I stepped onto a scale and weighed in at 245. Yikes! I began using MyFitnessPal.com to track everything I ate. I managed to drop about ten pounds, and then things plateaued.

I was diagnosed with sleep apnea. My doctor duly informed me that aggressively losing weight can help with apnea. (Seems like doctors will say EVERYTHING is helped by losing weight, right?) I tried and tried and tried, but it seemed like I couldn’t get below that 230 lbs. floor. So I threw in the towel.

New approach to things

In March of last year, I read some new articles on health and diet. The most outrageous article pointed out that today’s typical breakfast of cereal is nutritionally equivalent to Halloween candy. That one took a bit to settle in, but I realized it’s true. Our vaunted high grain, low fat diet espoused by the food pyramid is ridiculous and not grounded in real research, but is instead a huge experiment (that is failing).

Then in October, my wife was introduced by a friend to the Trim Healthy Mama plan. The second she explained it to me, I was onboard.

It incorporates several elements:

  • Eat something every three hours, because that is about how long your body takes to process a meal. This keeps you from feeling starved, and also gives you grace to fall off the plan and get back on it without much hoopla.
  • Eat a healthy protein in each meal combined with either a good fat or a good carb, but not both. Your body processes either fat or carbs at any one time, but not both. And it favors fats, so carbs get stored.

That’s it! It doesn’t sound that hard. Well, the people behind Trim Healthy Mama have published a ginormous recipe book, and there are Pinterest groups posting recipes all the time.

A major shift in our diet was to virtually eliminate all sugar and classic flour. That’s how you move off of bad carbs and onto good ones. We use a lot of stevia and what’s known as “THM Baking Blend”, a gluten free, oat-based flour. The glycemic index of this stuff is much lower, and it keeps your blood sugar from spiking.

By confining what you eat in any given 3-hour window, your body can actually burn through things and help you start losing weight.

Old tasty stuff – gone, new tasty stuff – in

How about some real examples? Breakfast cereal, pancakes, waffles, and donuts are loaded with sugar and bad carbs. And we’re not talking just Frosted Flakes. Almost every breakfast cereal, whether it’s granola, Honey Smacks, or Bob’s Whole Grain Cereal, has about the same calories and sugar content. Off the menu. (This part makes me cry. I LOVE this stuff!!)

What are some things that are in? Try bacon and eggs. Yum!!! There is another great dish called French Toast in a Bowl. It’s a scoop of Baking Blend, an egg, a little butter, and a packet of stevia.

Other stuff to eat includes triple zero yogurt and several chicken and beef recipes. Also look for uncured meats. Uncured means they aren’t coated with sugar. (Yes, they make uncured bacon.)

We have gotten a lot of mileage out of our crock pot, making some Indian chicken dishes as well as chicken-based white chili. It’s also not hard to retool some existing recipes by swapping out sugar and traditional flour.

My absolute favorite (after bacon and eggs) includes the Trimtastic Chocolate Cake.

I learned the difference between chocolate and cocoa. Cocoa has hardly any calories. If you combine it in a recipe with real butter (good fat), stevia (zero calorie), and almond milk (low calorie), you get the taste of chocolate without the ugly baggage.

That recipe lets you eat REAL whipped cream (I made it myself) combined with dark chocolate. Mmm!!! We made one and brought it to Thanksgiving this year. My father-in-law, who is not on the plan, thought it was delicious.

Don’t sweat going off plan now and then

Reader: “You’re showing the same image twice.”
Writer: “I really like bacon.”

The biggest failure we all have is getting off our diet. With the Trim Healthy Plan, it’s okay to go off now and again. You can get back on three hours later. So during Thanksgiving, I didn’t try to starve myself. I just went off plan that day as I feasted on turkey, ham, dinner rolls, and sweet potato casserole. The next morning, eggs and bacon. (So good!)

When our Writer’s Group met before Christmas for a dinner party, we went off plan. No big deal!

And Christmas goodies? I was able to enjoy them without feeling guilty. Because I know there is a delicious, on-plan meal around the corner. And that’s how I lost weight during the holidays, slowly but surely. Maybe not as fast as early November, but this isn’t a sprint, it’s a marathon.

Happy New Year!

The many flaws of test coverage

Recently, in a Twitter chat with a couple of friends of mine, the subject of test coverage re-appeared. I rolled my eyes. Ready to start ranting, I remembered already covering the many flaws of test coverage in Python Testing Cookbook. So I thought, perhaps an excerpt would be better.

From Chapter 9 of Python Testing Cookbook:

****

Coverage Isn’t Everything

You’ve figured out how to run coverage reports. But don’t assume that more coverage is automatically better. Sacrificing test quality in the name of coverage is a recipe for failure.

How to do it…

Coverage reports provide good feedback. They tell us what is getting exercised and what is not. But just because a line of code is exercised doesn’t mean it is doing everything it is meant to do.

Are you ever tempted to brag about coverage percentage scores in the break room? Taking pride in good coverage isn’t unwarranted, but when it leads to comparing different projects using these statistics, we are wandering into risky territory.

How it works…

Coverage reports are meant to be read in the context of the code they were run against. The reports show us what was covered and what was not, but this isn’t where things stop. Instead, it’s where they begin. We need to look at what was covered, and analyze how well the tests exercised the system.

It’s obvious that 0% coverage of a module indicates we have work to do. But what does it mean when we have 70% coverage? Do we need to code tests that go after the other 30%? Sure we do! But there are two different schools of thought on how to approach this. One is right and one is wrong:

  • The first approach is to write the new tests specifically targeting the uncovered parts while trying to avoid overlapping the original 70%. Redundantly testing code already covered by another test is an inefficient use of resources.
  • The second approach is to write the new tests so they target scenarios the code is expected to handle, but which we haven’t tackled yet. What was not covered should give us a hint about what scenarios haven’t been tested yet.

The right approach is the second one. Okay, I admit I wrote that in a leading fashion. But the point is that it’s very easy to look at what wasn’t hit, and write a test that shoots to close the gap as fast as possible.

There’s more…

Python gives us incredible power to monkey patch, inject alternate methods, and do other tricks to exercise the uncovered code. But doesn’t this sound a little suspicious? Here are some of the risks we are setting ourselves up for:

  • The new tests may be more brittle when they aren’t based on sound scenarios.
  • A major change to our algorithms may require us to totally rewrite these tests.
  • Ever written mock-based tests? It’s possible to mock the target system out of existence and end up just testing the mocks.
  • Even though some (or even most) of our tests may have good quality, the low quality ones will cast our entire test suite as low quality.

The coverage tool may not let us “get away” with some of these tactics if we do things that interfere with the line counting mechanisms. But whether or not the coverage tool counts the code should not be the gauge by which we determine the quality of tests.

Instead, we need to look at our tests and see if they are trying to exercise real use cases we should be handling. When we are merely looking for ways to get more coverage percentage, we stop thinking about how our code is meant to operate, and that is not good.

Are we not supposed to increase coverage?

We are supposed to increase coverage by improving our tests, covering more scenarios, and by removing code no longer supported. These things all lead us towards overall better quality.

Increasing coverage for the sake of coverage doesn’t lend itself to improving the quality of our system.

But I want to brag about the coverage of my system!

I think it’s alright to celebrate good coverage. Sharing a coverage report with your manager is alright. But don’t let it consume you.

If you start to post weekly coverage reports, double check your motives. Same goes if your manager requests postings as well.

If you find yourself comparing the coverage of your system against another system, then watch out! Unless you are familiar with the code of both systems and really know more than the bottom line of the reports, you are probably wandering into risky territory. You may be headed into faulty competition that could drive your team to write brittle tests.

****

Agree? Disagree? Feel free to put in your own opinions on the pros and cons of test coverage reports in the comments section.

How Guidance saved Christmas with Spring Boot

I hope you all have settled down with a hot cup of cocoa. Because it’s time for the most beloved Christmas tale of all. The one where Guidance Saved Christmas with Spring Boot.

Guidance the Elf had seen Santa facing new issues. It seemed like managing the list of children, in addition to invoicing and warehouse inventory, was harder than ever. Scaling was becoming a bugbear. Guidance was saddened by the challenges faced. But he had to report for duty in the Turnquist household.

One night, after having made his first appearance, Guidance spotted Learning Spring Boot.

“What’s this?” he thought. So he sat down and read the whole thing. (Elves can read entire books in one night, you know). Reading the book, his eyes opened wide. Spring Boot might just do the trick!

The following night, after everyone had gone to sleep, Guidance found Greg’s laptop, and fired up IntelliJ. Using the code examples from the book, he was able to draft up some new ideas.

“Wow! Wait until Santa sees this!”

The next night, Guidance watched the Learning Spring Boot video, and saw even more things not covered in the first book. (Guidance used earphones so as not to wake anyone while watching the video.)

Using new things learned in the video, he made more changes to his demo app. He planned a demo the following night with Santa’s technical team, including how the video showed debugging in the cloud using Spring Tool Suite.

The team was impressed. They began to talk among themselves. Their technical troubles could be cured!

A few nights later on ElfSlack, the senior designer contacted Guidance. The buzz about Spring Boot had spurred him to buy copies of the book and video for the whole team. But that wasn’t what he was calling about. Instead, he wanted to share something more exciting than that.

A 2nd Edition was in progress. A newer version of the book that would include Spring 5 and Spring Boot 2, including its reactive streams-based, non-blocking, async programming model. Guidance blinked with excitement.

Guidance had already coded half a dozen sample apps with eagerness. Spring Boot had changed his view of writing software. But the idea that he could seamlessly write reactive code without giving up the existing power of Spring was unbelievable.

This amazed him so much that he logged onto Amazon and pre-ordered his own copy.

He had seen more magic this year than all other Christmases combined.

So much, in fact, that he had a new idea.

“I wonder if I could convince James Watters to make a special trip to the North Pole and give a talk about Pivotal Cloud Foundry.”

The answer to that…is another tale.

Merry Christmas to all and to all a good night!