Friday, 15 April 2022

RecyclerView: Add dividers and spaces between items

It turns out that adding a divider or a space between items in a RecyclerView is non-trivial. You have to extend the RecyclerView.ItemDecoration abstract class and then get into the weeds of the RecyclerView class. It should be simple, but such is the state of Android development. I'm sure you'd rather skip over the details and make a simple call as follows:

myRecyclerView.setColoredDivider(dividerColor, dividerHeightPixels)
// or
myRecyclerView.setVerticalGap(verticalGapInPixels)

If so, then add the com.tazkiyatech:android-utils library to your project and benefit from the Kotlin extension functions defined in the RecyclerViewExtensions.kt file. If you like complexity, then I'll point you to the RecyclerViewColoredDividerItemDecoration.java and RecyclerViewVerticalGapItemDecoration.java files instead.
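
For a flavour of what the library saves you from, here's a rough sketch of a coloured-divider ItemDecoration. This is my own illustration, not the library's actual implementation:

```kotlin
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.Rect
import android.view.View
import androidx.recyclerview.widget.RecyclerView

// A rough sketch (mine, not the library's actual implementation) of an
// ItemDecoration that draws a coloured divider below each item.
class ColoredDividerItemDecoration(dividerColor: Int,
                                   private val dividerHeightPixels: Int) : RecyclerView.ItemDecoration() {

    private val paint = Paint().apply { color = dividerColor }

    // Reserve space below each item for the divider to be drawn into.
    override fun getItemOffsets(outRect: Rect, view: View, parent: RecyclerView, state: RecyclerView.State) {
        outRect.bottom = dividerHeightPixels
    }

    // Draw the divider into the reserved space below each child view.
    override fun onDraw(c: Canvas, parent: RecyclerView, state: RecyclerView.State) {
        for (i in 0 until parent.childCount) {
            val child = parent.getChildAt(i)
            c.drawRect(child.left.toFloat(),
                       child.bottom.toFloat(),
                       child.right.toFloat(),
                       (child.bottom + dividerHeightPixels).toFloat(),
                       paint)
        }
    }
}
```

An extension function like setColoredDivider then only needs to wrap addItemDecoration(...) with an instance of such a class.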

Friday, 24 December 2021

Gradle: Commands to make sense of your project's dependencies

In the code examples below, I assume you are working in a multi-project build and one of the subprojects in this build is named "app". If you are working in a single-project build, then the commands you want to run are of the form gradle someTask and not gradle :app:someTask.
The first and simplest command you can run to print a dependency graph for each and every configuration in your project is:

gradle :app:dependencies

If you want to print a dependency graph for only one of the configurations in your project, then the command to run is:

gradle :app:dependencies --configuration someConfiguration

To print a list of all of the configurations in your project, the command to run is:

gradle :app:dependencies | grep ' - '
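
This works because each configuration appears in the report on a line of the form name - description, and the pattern ' - ' matches that space-hyphen-space sequence but not dependency lines such as +--- group:name:version. Here's a simulated illustration (the report excerpt below is invented, not real output from any project):

```shell
# Filter a simulated excerpt of a `gradle :app:dependencies` report
# (the lines below are invented for illustration) with the same grep
# pattern used above. Only the configuration lines survive, because
# only they contain a hyphen surrounded by spaces.
configurations=$(grep ' - ' <<'EOF'
debugCompileClasspath - Compile classpath for compilation 'debug'.
+--- androidx.core:core-ktx:1.7.0
releaseCompileClasspath - Compile classpath for compilation 'release'.
\--- org.jetbrains.kotlin:kotlin-stdlib:1.6.10
EOF
)
echo "$configurations"
```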

Lastly, if all you want is to get insight into a single dependency within a single configuration in your project, then the command to run is:

gradle :app:dependencyInsight --configuration someConfiguration --dependency someGroup:someName

Saturday, 20 November 2021

Book Review: Working Effectively With Legacy Code, by Michael Feathers

For me, this book lived up to the hype. Essentially, it defines legacy code as any code that does not have supporting tests and it provides lots of examples of how to get such code under test. I suppose, over time, I will forget the particular examples and techniques described in the book for getting code under test. However, I'm certain the principles will stick with me: Get the system under test before taking any steps to get it "right".

Below are some definitions:

Cover and Modify: Working with a safety net when we change software. The safety net isn't something that we put underneath our tables to catch us if we fall out of our chairs. Instead, it's like a cloak that we put over code we are working on to make sure that bad changes don't leak out and infect the rest of our software. Covering software means covering it with tests. This contrasts with the Edit and Pray approach to changing software.

The Legacy Code Dilemma: "When we change code, we should have tests in place. To put tests in place, we often have to change code."

Seam: "A seam is a place where you can alter behaviour in your program without editing in that place."

Enabling Point: "Every seam has an enabling point, a place where you can make the decision to use one behaviour or another."
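
As an illustration of both definitions (my own example, not the book's), consider an object seam in Kotlin:

```kotlin
// My own illustration (not from the book) of an object seam and its
// enabling point.
open class MessageSender {
    // This call is a seam: its behaviour can be altered (e.g. in a test)
    // without editing this method or its callers.
    open fun send(message: String) {
        println("Sending for real: $message")
    }
}

class Notifier(private val sender: MessageSender) {
    fun notifyUser() = sender.send("Hello!")
}

// A test double that records messages instead of sending them.
class FakeMessageSender : MessageSender() {
    val sent = mutableListOf<String>()
    override fun send(message: String) {
        sent += message
    }
}
```

The enabling point is the Notifier constructor: a test passes FakeMessageSender() there, while production code passes the real MessageSender, and neither case requires editing Notifier or send itself.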

Refactoring: "A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its existing behaviour. "

Command/Query Separation: "A method should be a command or a query, but not both. A command is a method that can modify the state of the object but that doesn't return a value. A query is a method that returns a value but that does not modify the object. Why is this principle important? There are a number of reasons, but the most primary is communication. If a method is a query, we shouldn't have to look at its body to discover whether we can use it several times in a row without causing some side effect."
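
To illustrate the principle (an example of mine, not from the book):

```kotlin
// Example of mine (not from the book) illustrating command/query separation.
class Counter {

    private var count = 0

    // Command: modifies the state of the object and returns nothing.
    fun increment() {
        count++
    }

    // Query: returns a value without modifying the object, so callers can
    // invoke it any number of times without causing side effects.
    fun currentCount(): Int = count
}
```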

Effects and Encapsulation: "Encapsulation is important, but the reason why it is important is more important. Encapsulation helps us reason about our code. In well-encapsulated code, there are fewer paths to follow as you try to understand it. Breaking encapsulation can make reasoning about our code harder, but it can make it easier if we end up with good explanatory tests afterward. When we have test cases for a class, we can use them to reason about our code more directly. We can also write new tests for any questions that we might have about the behaviour of the code."

Interception point: "A point in your program where you can detect the effects of a particular change. In some applications, finding them is tougher than it is in others. If you have an application whose pieces are glued together without many natural seams, finding a decent interception point can be a big deal. Often it requires some effect reasoning and a lot of dependency breaking." 

Characterisation test: "A test that characterises the actual behaviour of a piece of code. There's no "Well, it should do this" or "I think it does that." The tests document the actual current behaviour of the system. If we find something unexpected when we write them, it pays to get some clarification. It could be a bug. That doesn't mean that we don't include the test in our test suite; instead, we should mark it as suspicious and find out what the effect would be of fixing it." 

Scratch refactoring: A technique for learning about code that runs as follows: check out the code from your version-control system, forget about writing tests, extract methods, move variables, refactor it whatever way you want to get a better understanding of it, and then throw that code away. It's a great way of getting down to the essentials and really learning how a piece of code works.

Single-Responsibility Principle: Every class should have a single responsibility: It should have a single purpose in the system, and there should be only one reason to change it. 

Interface Segregation Principle: "When a class is large, rarely do all of its clients use all of its methods. Often we can see different groupings of methods that particular clients use. If we create an interface for each of these groupings and have the large class implement those interfaces, each client can see the big class through that particular interface. This helps us hide information and also decreases dependency in the system. The clients no longer have to recompile whenever the large class does." 

Open/Closed Principle: "... when we have good design, we just don't have to change code much to add new features." 

Safety first: "Once you have tests in place, you can make invasive changes much more confidently."

Below are some heuristics for seeing responsibilities in existing code:

  1. Group methods: Look for similar method names. Write down all of the methods on a class, along with their access types (public, private, and so on), and try to find ones that seem to go together.
  2. Look at hidden methods: Pay attention to private and protected methods. If a class has many of them, it often indicates that there is another class in the class dying to get out.
  3. Look for decisions that can change: Look for decisions – not decisions that you are making in the code, but decisions that you've already made. Is there some way of doing something (talking to a database, talking to another set of objects, and so on) that seems hard-coded? Can you imagine it changing?
  4. Look for internal relationships: Look for relationships between instance variables and methods. Are certain instance variables used by some methods and not others?
  5. Look for the primary responsibility: Try to describe the responsibility of the class in a single sentence.

Below are some quotes:

Foreword: "It's not enough to prevent the rot – you have to be able to reverse it."

Foreword: "... turn systems that gradually degrade into systems that gradually improve."

Preface: "Code can degrade in many ways, and many of them have nothing to do with whether the code came from another team."

Preface: "... legacy code is simply code without tests."

Preface: "Code without tests is bad code. It doesn't matter how well written it is; it doesn't matter how pretty or object-oriented or well-encapsulated it is. With tests, we can change the behaviour of our code quickly and verifiably. Without them, we don't know if our code is getting better or worse."

Preface: "Teams take serious chances when they try to make large changes without tests. It is like doing aerial gymnastics without a net. It requires incredible skill and a clear understanding of what can happen at every step."

Preface: "... don't be surprised if some of the steps you take to make changes involve making some code slightly uglier. This work is like surgery. We have to make incisions, and we have to move through the guts and suspend some aesthetic judgement."

Changing Software: "The difference between good systems and bad ones is that, in the good ones, you feel pretty calm after you've done that learning, and you are confident in the change you are about to make. In poorly structured code, the move from figuring things out to making changes feels like jumping off a cliff to avoid a tiger."

Working with feedback: "... safety isn't solely a function of care. I don't think any of us would choose a surgeon who operated with a butter knife just because he worked with care. Effective software change, like effective surgery, really involves deeper skills."

Working with feedback: "When we have a good set of tests around a piece of code, we can make changes and find out very quickly whether the effects were good or bad."

Working with feedback: "When we have tests that detect changes, it is like having a vise around our code. The behaviour of the code is fixed in place. When we make changes, we can know that we are changing only one piece of behaviour at a time. In short, we're in control of our work."

Working with feedback: "Unit testing is one of the most important components in legacy code work. System-level regression tests are great, but small, localised tests are invaluable. They can give you feedback as you develop and allow you to refactor with much more safety."

Working with feedback: "... in unit testing, we are usually concerned with the most atomic behavioural units of a system. In procedural code, the unit tests are often functions. In object-oriented code, the units are classes."

Working with feedback: "... qualities of good unit tests: (1) They run fast. (2) They help us localise problems."

Working with feedback: "A test is not a unit test if: (1) It talks to a database. (2) It communicates across a network. (3) It touches the file system. (4) You have to do special things to your environment to run it."

Working with feedback: "... when we cover our code with tests before we change it, we're more likely to catch any mistakes that we make."

Working with feedback: "Dependency is one of the most critical problems in software development. Much legacy code work involves breaking dependencies so that change can be easier."

Working with feedback: "When you break dependencies in legacy code, you often have to suspend your sense of aesthetics a bit. Some dependencies break cleanly; others end up looking less than ideal from a design point of view. They are like incision points in surgery: There might be a scar left in your code after your work, but everything beneath it can get better."

Working with feedback: "We want to make functional changes that deliver value while bringing more of the system under test. At the end of each programming episode, we should be able to point not only to code that provides some new feature, but also its tests."

Sensing and separation: "When we write tests for individual units, we end up with small, well-understood pieces."

The Seam Model: "Pulling classes out of existing projects for testing really changes your idea of what "good" is with regard to design." 

It takes forever to make a change: "Systems that are broken up into small, well-named, understandable pieces enable faster work." 

How do I add a feature?: "The most powerful feature-addition technique I know of is test-driven development (TDD)... We imagine a method that will help us solve some part of a problem, and then we write a failing test case for it. The method doesn't exist yet, but if we can write a test for it, we've solidified our understanding of what the code we are about to write should do."

I can't run this method in a test harness: "Good design is testable, and design that isn't testable is bad."

I can't run this method in a test harness: "... the pain that we feel working in a legacy code base can be an incredible impetus to change. We can take the sneaky way out [i.e. hack around problems], but unless we deal with the root causes, overly responsible classes and tangled dependencies, we are just delaying the bill. When everyone discovers just how bad the code has gotten, the costs to make it better will have gotten too ridiculous."

Effects and Encapsulation: "Encapsulation and test coverage aren't always at odds, but when they are, I bias toward test coverage. Often it can help me get more encapsulation later."

Effects and Encapsulation: "Encapsulation isn't an end in itself; it is a tool for understanding."

I need to make many changes in one area: "The discussions that you have about naming have benefits far beyond the work that you are currently doing. They help you and your team develop a common view of what the system is and what it can become."

I need to make a change: "... finding bugs in legacy code usually isn't a problem. In terms of strategy, it can actually be misdirected effort. It is usually better to do something that helps your team start to write correct code consistently. The way to win is to concentrate effort on not putting bugs into code in the first place."

Characterisation tests: "In nearly every legacy system, what the system does is more important than what it is supposed to do. If we write tests based on our assumption of what the system is supposed to do, we're back to bug finding again. Bug finding is important, but our goal right now is to get tests in place that help us make changes more deterministically."

Characterisation tests: "We aren't trying to find bugs [when writing tests for legacy code]. We are trying to put in a mechanism to find bugs later, bugs that show up as differences from the system's current behaviour. When we adopt this perspective, our view of our tests is different: They don't have any moral authority; they just sit there documenting what pieces of the system really do."

Dependencies on libraries are killing me: "Avoid littering direct calls to library classes in your code. You might think that you'll never change them, but that can become a self-fulfilling prophecy."

My application has no structure: "When teams aren't aware of their architecture, it tends to degrade." 

My application has no structure: "... architecture is too important to be left exclusively to a few people. It's fine to have an architect, but the key way to keep an architecture intact is to make sure that everyone on the team knows what it is and has a stake in it. Every person who is touching the code should know the architecture..."

My application has no structure: "If you have, say, a team of 20 people and only 3 people know the architecture in detail, either those 3 have to do a lot to keep the other 17 people on track or the other 17 people just make mistakes caused by unfamiliarity with the big picture."

Telling the Story of the System: "... explain the architecture of the system using only a few concepts, maybe as few as two or three... Pragmatic considerations often keep things from getting simple, but there is value in articulating the simple view. At the very least, it helps everyone understand what would've been ideal and what things are there as expediencies. The other important thing about this technique is that it really forces you to think about what is important in the system."

Telling the Story of the System: "Teams can only go so far when the system they work on is a mystery to them. In an odd way, having a simple story of how a system works just serves as a roadmap, a way of getting your bearings as you search for the right places to add features. It can also make a system a lot less scary."

Telling the Story of the System: "On your team, tell the story of the system often, just so that you share a view. Tell it in different ways. Trade off whether one concept is more important than another. As you consider changes to the system, you'll notice that some changes fall more in line with the story. That is, they make the briefer story feel like less of a lie. If you have to choose between two ways of doing something, the story can be a good way to see which one will lead to an easier-to-understand system."

Telling the Story of the System: "When we simplify and rip away details to describe a system, we are really abstracting. Often when we force ourselves to communicate a very simple view of a system, we can find new abstractions."

Telling the Story of the System: "If a system isn't as simple as the simplest story we can tell about it, does that mean that it's bad? No. Invariably, as systems grow, they get more complicated. The story gives us guidance."

My application has no structure: "... there is something mesmerising about large chunks of procedural code: They seem to beg for more."

Adding New Behaviour: "Often the work of trying to formulate a test for each piece of code that we're thinking of writing leads us to alter its design in good ways. We concentrate on writing functions that do some piece of computational work and then integrate them into the rest of the application."

This class is too big: "Many of the features that people add to systems are little tweaks. They require the addition of a little code and maybe a few more methods. It's tempting to just make these changes to an existing class. Chances are, the code that you need to add must use data from some existing class, and the easiest thing is to just add code to it. Unfortunately, this easy way of making changes can lead to some serious trouble. When we keep adding code to existing classes, we end up with long methods and large classes. Our software turns into a swamp, and it takes more time to understand how to add new features or even just understand how old features work."

This class is too big: "What are the problems with big classes? The first is confusion. When you have 50 or 60 methods on a class, it's often hard to get a sense of what you have to change and whether it is going to affect anything else. In the worst case, big classes have an incredible number of instance variables, and it is hard to know what the effects are of changing a variable. Another problem is task scheduling. When a class has 20 or so responsibilities, chances are, you'll have an incredible number of reasons to change it. In the same iteration, you might have several programmers who have to do different things to the class. If they are working concurrently, this can lead to some serious thrashing, particularly because of the third problem: Big classes are a pain to test."

This class is too big: "Classes that are too big often hide too much... when we encapsulate too much, the stuff inside rots and festers. There isn't any easy way to sense the effects of change, so people fall back on Edit and Pray programming. At that point, either changes take far too long or the bug count increases. You have to pay for the lack of clarity somehow."

This class is too big: "When you put new code into a new class, sure, you might have to delegate from the original class, but at least you aren't making it much bigger."

This class is too big: "If you add code in a new method, yes, you will have an additional method, but at the very least, you are identifying and naming another thing that the class does; often the names of methods can give you hints about how to break down a class into smaller pieces."

Seeing responsibilities: "Learning to see responsibilities is a key design skill, and it takes practice. It might seem odd to talk about a design skill in this context of working with legacy code, but there really is little difference between discovering responsibilities in existing code and formulating them for code that you haven't written yet. The key thing is to be able to see responsibilities and learn how to separate them well. If anything, legacy code offers far more possibilities for the application of design skill than new features do. It is easier to talk about design tradeoffs when you see the code that will be affected, and it is also easier to see whether structure is appropriate in a given context because the context is real and right in front of us."

Seeing responsibilities: "... we are not inventing responsibilities; we're just discovering what is there. Regardless of what structure legacy code has, its pieces do identifiable things."

Seeing responsibilities: "The more you start noticing the responsibilities inherent in code, the more you learn about it."

Seeing responsibilities: "If you can identify some of these responsibilities that are a bit off to the side of the main responsibility of the class, you have a direction in which you can take the code over time."

Seeing responsibilities: "... if you have the urge to test a private method, the method shouldn't be private; if making the method public bothers you, chances are, it is because it is part of a separate responsibility. It should be on another class." 

Single-goal editing: "I have this little mantra that I repeat to myself when I'm working: "Programming is the art of doing one thing at a time." When I'm pairing, I always ask my partner to challenge me on that, to ask me "What are you doing?" If I answer more than one thing, we pick one. I do the same for my partner. Frankly, it's just faster. When you are programming, it is pretty easy to pick off too big of a chunk at a time. If you do, you end up thrashing and just trying things out to make things work rather than working very deliberately and really knowing what your code does." 

Pair Programming: "... working in legacy code is surgery, and doctors never operate alone." 

We feel overwhelmed: "... I've visited teams with millions of lines of legacy code who looked at each day as a challenge and as a chance to make things better and have fun... The attitude we bring to the work is important." 

Dependency-breaking techniques: "Code is harder to understand when it is littered with wide interfaces containing dozens of unused methods. When you create narrow abstractions targeted toward what you need, your code communicates better and you are left with a better seam." 

Dependency-breaking techniques: "Your bias should be toward making changes that you feel more confident in rather than changes that give you the best structure. Those can come after your tests." 

Dependency-breaking techniques: "... when we don't have tests in place and we are trying to do the minimal work we need to get tests in place, it is best to leave logic alone as much as possible." 

Dependency-breaking techniques: "Naming is a key part of design. If you choose good names, you reinforce understanding in a system and make it easier to work with. If you choose poor names, you undermine understanding and make life hellish for the programmers who follow you." 

Dependency-breaking techniques: "Although singletons do prevent people from making more than one instance of a class in production code, they also prevent people from making more than one instance of a class in a test harness."
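
One of the book's dependency-breaking techniques, Introduce Static Setter, addresses exactly this. A sketch of it in Kotlin (my own illustration, not the book's code):

```kotlin
// A sketch (mine) of the book's Introduce Static Setter technique: the
// singleton keeps its getInstance() accessor, but gains a back door that
// lets a test swap in a fresh or fake instance. Note the constructor is
// no longer private so that tests can create instances, a trade-off the
// book acknowledges.
class MessageRouter {

    companion object {
        private var instance: MessageRouter? = null

        fun getInstance(): MessageRouter {
            if (instance == null) {
                instance = MessageRouter()
            }
            return instance!!
        }

        // Testing back door: replaces the singleton instance.
        fun setTestingInstance(newInstance: MessageRouter) {
            instance = newInstance
        }
    }
}
```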

Friday, 21 August 2020

Espresso tests: Match child view by position within RecyclerView

Let's say you want to write a user interface test that matches the child view at a particular position within a RecyclerView and you want to assert some properties on that child view. When you search for this online, you'll come across a whole host of solutions that do work but, sadly, are not very fluent or Espresso-esque. You'll see solutions that will lead you to write assertions like the following:

onView(withRecyclerView(R.id.recyclerView).atPositionOnView(0))
    .check(matches(withText("Some Text")))

onView(withId(R.id.recyclerView))
    .check(matches(atPosition(0, withText("Some Text"))))

Like I said: sure, this works, but I feel we can do better. Most notably, I would match on the child view of interest and assert properties on it directly, rather than matching on the entire RecyclerView. So what we're aiming for are assertions like the following:

onView(withPositionInRecyclerView(R.id.recyclerView, 0))
    .check(matches(withText("Some Text")))

It's a subtle change and maybe it's just me but I feel this is a little easier on the eye and makes the reading experience a little more pleasurable. If you've made it this far and you're bought in, here's the Matcher class you need to define to make this possible:

/**
 * A matcher that matches the child [View] at the given position
 * within the [RecyclerView] which has the given resource id.
 *
 * Note that it's necessary to scroll the [RecyclerView] to the desired position
 * before attempting to match the child [View] at that position.
 */
class WithPositionInRecyclerViewMatcher(private val recyclerViewId: Int,
                                        private val position: Int) : TypeSafeMatcher<View>() {

    override fun describeTo(description: Description) {
        description.appendText("with position $position in RecyclerView which has id $recyclerViewId")
    }

    override fun matchesSafely(item: View): Boolean {
        val parent = item.parent as? RecyclerView
            ?: return false

        if (parent.id != recyclerViewId)
            return false

        val viewHolder: RecyclerView.ViewHolder = parent.findViewHolderForAdapterPosition(position)
            ?: return false // has no item on such position

        return item == viewHolder.itemView
    }
}

And, for best practice, define a function that wraps this class as follows:

/**
 * @return an instance of [WithPositionInRecyclerViewMatcher] created with the given parameters.
 */
fun withPositionInRecyclerView(recyclerViewId: Int, position: Int): Matcher<View> {
    return WithPositionInRecyclerViewMatcher(recyclerViewId, position)
}

That's it. We're done. Happy testing and for more view matchers like this one, check out the android-test-utils repository.

Monday, 17 August 2020

Espresso tests: Wait until view is visible

Whilst searching online for some suggestions on how to wait for a particular View to become visible within an Espresso test, I noticed that all of the suggestions that defined a new ViewAction class defined one that operated on the root view. I can see why this is necessary in situations where the View in question is not present in the view hierarchy and will enter the view hierarchy at a later point. However, if you're waiting on a View that's present in the view hierarchy to change from one state to another, it's much more elegant to match and operate on that one View specifically rather than matching and operating on the root view.

So, assuming you're waiting on a View that's present in the view hierarchy to change from INVISIBLE or GONE visibility to VISIBLE, you can define a ViewAction class as follows:

/**
 * A [ViewAction] that waits up to [timeout] milliseconds for a [View]'s visibility value to change to [View.VISIBLE].
 */
class WaitUntilVisibleAction(private val timeout: Long) : ViewAction {

    override fun getConstraints(): Matcher<View> {
        return any(View::class.java)
    }

    override fun getDescription(): String {
        return "wait up to $timeout milliseconds for the view to become visible"
    }

    override fun perform(uiController: UiController, view: View) {

        val endTime = System.currentTimeMillis() + timeout

        do {
            if (view.visibility == View.VISIBLE) return
            uiController.loopMainThreadForAtLeast(50)
        } while (System.currentTimeMillis() < endTime)

        throw PerformException.Builder()
            .withActionDescription(description)
            .withCause(TimeoutException("Waited $timeout milliseconds"))
            .withViewDescription(HumanReadables.describe(view))
            .build()
    }
}

And define a function that creates an instance of this ViewAction when called, as follows:

/**
 * @return a [WaitUntilVisibleAction] instance created with the given [timeout] parameter.
 */
fun waitUntilVisible(timeout: Long): ViewAction {
    return WaitUntilVisibleAction(timeout)
}

You can then call on this ViewAction in your test methods as follows:

onView(withId(R.id.myView)).perform(waitUntilVisible(3000L))

You can run with this concept and similarly define view actions that wait on other properties of the view to change state, e.g. waiting for the text of a TextView to change to some expected text.
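
For instance, a ViewAction that waits for a TextView's text to become some expected value could follow the same pattern. Here's a sketch of mine along those lines:

```kotlin
import android.view.View
import android.widget.TextView
import androidx.test.espresso.PerformException
import androidx.test.espresso.UiController
import androidx.test.espresso.ViewAction
import androidx.test.espresso.matcher.ViewMatchers.isAssignableFrom
import androidx.test.espresso.util.HumanReadables
import org.hamcrest.Matcher
import java.util.concurrent.TimeoutException

/**
 * A [ViewAction] that waits up to [timeout] milliseconds for a [TextView]'s
 * text to change to [expectedText]. (A sketch of mine, following the same
 * pattern as the WaitUntilVisibleAction class above.)
 */
class WaitUntilTextChangedAction(private val expectedText: String,
                                 private val timeout: Long) : ViewAction {

    override fun getConstraints(): Matcher<View> {
        // Constrain the action to TextView instances so the cast below is safe.
        return isAssignableFrom(TextView::class.java)
    }

    override fun getDescription(): String {
        return "wait up to $timeout milliseconds for the view's text to become \"$expectedText\""
    }

    override fun perform(uiController: UiController, view: View) {

        val endTime = System.currentTimeMillis() + timeout

        do {
            if ((view as TextView).text.toString() == expectedText) return
            uiController.loopMainThreadForAtLeast(50)
        } while (System.currentTimeMillis() < endTime)

        throw PerformException.Builder()
            .withActionDescription(description)
            .withCause(TimeoutException("Waited $timeout milliseconds"))
            .withViewDescription(HumanReadables.describe(view))
            .build()
    }
}
```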

For more view actions and view matchers like this one, check out the android-test-utils repository.

Espresso tests: Match child view by position within parent

I noticed whilst browsing online that all of the answers to the question "can I match the child at a particular index within a particular parent view" required the definition of a new Matcher class. Whilst it's not a huge deal to define a new Matcher class as and when needed, it's not necessary in this case. You can instead get a handle on a particular child of a particular view by joining up the view matchers offered by Espresso into a method as follows:

/**
 * @param parentViewId the resource id of the parent [View].
 * @param position the child index of the [View] to match.
 * @return a [Matcher] that matches the child [View] which has the given [position] within the specified parent.
 */
fun withPositionInParent(parentViewId: Int, position: Int): Matcher<View> {
    return allOf(withParent(withId(parentViewId)), withParentIndex(position))
}

You can use this method as follows:

onView(
    withPositionInParent(R.id.parent, 0)
).check(
    matches(withId(R.id.child))
)

For more view actions and view matchers like this one, check out the android-test-utils repository.

Thursday, 30 July 2020

Xcode UI tests: saveScreenshot(...) extension function

Taking a screenshot as part of an Xcode UI test and adding the screenshot to the test's output is a five-step shuffle as follows:

let screenshot = XCUIApplication().screenshot()
let attachment = XCTAttachment(screenshot: screenshot)
attachment.lifetime = .keepAlways
attachment.name = "SomeName"
add(attachment)

Wouldn't it be great if this could be reduced to a single line call as follows:

XCUIApplication().saveScreenshot(to: self, named: "SomeName")

To make this possible, all you need to do is add the following extension function to your UI testing bundle:

extension XCUIScreenshotProviding {
    
    func saveScreenshot(to activity: XCTActivity, named name: String) {
        let attachment = XCTAttachment(screenshot: screenshot())
        attachment.lifetime = .keepAlways
        attachment.name = name
        activity.add(attachment)
    }
}

You can find this extension function and others like it in the XCTestExtensions repo.