Exited (143)

One day, I had to resolve a JIRA ticket with the following message:


(Some steps here)

Expected Result:
Every container should be up & running.

Actual Result:
A few containers are down

There was also a nice figure presenting that state:


The first time I read the JIRA ticket, I did not pay much attention to the figure above. I just logged in to the server to see what had happened with the applications running in the Docker containers.

There was nothing interesting. The apps seemed to have been working properly (no exceptions, crashes, etc.)… and then they suddenly shut down… and they did so gracefully (meaning they closed their connections and any other IO, etc.).

It all looked like they had been asked to shut down… in fact, the figure above says that right away in the STATUS column. It says “Exited (143)”…

I am not a DevOps engineer or an admin, so I had to make sure that the signal code was in fact the one I suspected…

There are two very important references regarding SIGNALS in unix: Unix Signals and exit codes with special meaning.

The first one is just a list of the signals that are used in the Unix system. You can get the same list by typing

kill -l

in terminal.

These signals are numbered from 1 to 34, and none of these numbers matches the 143 code…

That is why the second page is more interesting as it tells us how to interpret these exit codes:


Fatal error signal “n”

kill -9 $PPID of script

$? returns 137 (128 + 9)


If we do the simple math: 143 - 128 = 15, we get our signal: SIGTERM, the one that killed the process.

Of course, we could google more to find out what SIGTERM does, etc. But the most important thing here is that the apps got killed with the SIGTERM signal… so they were asked to shut down by somebody or some external process.
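The arithmetic is easy to verify in a terminal. Below is a quick bash sketch (using `sleep` as a stand-in for the application):

```shell
# Start a long-running process, ask it to terminate, and inspect its exit status.
sleep 100 &
pid=$!
kill -TERM "$pid"        # send signal 15 (SIGTERM)
wait "$pid"              # wait reports the child's exit status
status=$?
echo "$status"           # prints 143, i.e. 128 + 15

# bash's kill builtin can also map an exit status back to a signal name:
kill -l 143              # prints TERM
```

This is exactly the status docker shows in its STATUS column when the process inside the container is terminated with SIGTERM.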

This was not enough for the reporter of the JIRA ticket.

Is there anything more that we can do? Yes, if we know when such an event happened.

Production Unix systems should have logs that will tell us who logged in and what commands the user issued.

If you are dealing with RedHat (the distribution of the server I worked with), you will find the logs under:

less /var/log/secure-20170716

Jul 13 10:35:46 ip-x-x-x-x sshd[4494]: Accepted publickey for xx-user from xx.xxx.xx.xx port XXXXX
Jul 13 10:35:46 ip-x-x-x-x sshd[4494]: pam_unix(sshd:session): session opened for user xx-user by (uid=0)
Jul 13 10:35:46 ip-x-x-x-x sudo: xx-user : TTY=unknown ; PWD=/home/xx-user ; USER=root ; COMMAND=/usr/bin/pkill -f some_app.jar
Jul 13 10:35:46 ip-x-x-x-x sshd[4496]: Received disconnect from xx.xxx.xx.xx: 11: disconnected by user
Jul 13 10:35:46 ip-x-x-x-x sshd[4494]: pam_unix(sshd:session): session closed for user xx-user

The above log gives us definite proof that user ‘xx-user’ logged in and ran the command:

COMMAND=/usr/bin/pkill -f some_app.jar

And this is the source of our SIGTERM signal that shut down the app.

In the end, it turned out that there was a Jenkins job that logged in, killed the old app, then deployed and ran a new app jar (which happened to be the same app that was running in Docker).

This was very bizarre, because we were not aware that the Jenkins job kills the app with the pkill command, which goes through the whole list of processes, tries to find a match by name… and kills every matching process. So it killed our Docker app as well.



Java: ZipOutputStream

Today, a colleague had a very odd problem. He wrote a few lines of code that did the following things:

  • query some data source
  • write the data to zip file
  • place that zip on the ftp

That does not sound like rocket science, does it? Still, he got himself a bit lost… because when he executed the code, out of 7 zip files written to the FTP, there was always at least one that could not be unpacked! And it happened randomly! There was always some broken file…

Curious what it could be? Let’s have a look at the code:

    public void call() throws Exception {
        ResultSet rs = null;
        FTPClient ftpClient = null;
        Writer writer = null;
        PreparedStatement stmt = null;
        try (Connection connection = initializeConnection()) {
            stmt = prepareData(connection);
            ftpClient = initFTPClient();
            writer = createWriter(ftpClient);
            rs = stmt.executeQuery();
            while (rs.next()) {
                writeData(rs, writer);
            }
        } finally {
            cleanUp(stmt, ftpClient, writer, rs);
        }
    }

    private Writer createWriter(FTPClient ftpClient) throws IOException {
        final String filename = "DataFeed" + "_" + new Date();
        OutputStream os = ftpClient.storeFileStream(new File("somePath").getPath() + "/" + filename + ".zip");
        ZipOutputStream zos = new ZipOutputStream(os);
        zos.putNextEntry(new ZipEntry(filename + ".txt"));
        return new BufferedWriter(new OutputStreamWriter(zos));
    }

    private void writeData(ResultSet rs, Writer writer) throws SQLException, IOException {
        // ... format the current row and write it through the writer ...
    }

    private void cleanUp(PreparedStatement s, FTPClient f, Writer w, ResultSet rs)
        throws SQLException, IOException {
        if (rs != null && !rs.isClosed()) {
            rs.close();
        }
        if (s != null && !s.isClosed()) {
            s.close();
        }
        if (w != null) {
            w.close();
        }
        if (f != null && f.isConnected()) {
            f.disconnect();
        }
    }

Have you spotted the problem yet? It all seems to be quite right, doesn’t it? The OutputStream is created, wrapped in the ZipOutputStream, and then in the BufferedWriter, etc. The outermost wrapper’s ‘close‘ method is called, which calls the ‘close‘ methods of all the wrapped objects… it should work! But it does not… Have a look at the following lines of code:

    private Writer createWriter(FTPClient ftpClient) throws IOException {
        final String filename = "DataFeed" + "_" + new Date();
        OutputStream os = ftpClient.storeFileStream(new File("somePath").getPath() + "/" + filename + ".zip");
        ZipOutputStream zos = new ZipOutputStream(os);
        zos.putNextEntry(new ZipEntry(filename + ".txt"));
        return new BufferedWriter(new OutputStreamWriter(zos));
    }

What does it do? Well, it appears that we are doing something more than just writing to the OutputStream here. What is that ‘Entry‘ thing? This marker is very important, as it turns out! Because we are working with a ZipOutputStream, we are also working with its file system. What? A file system? Exactly! An ‘Entry‘ tells the stream that, from now on, everything written to it should go into the file with that entry name. Therefore, in fact, we are writing to a file inside the zip file system. And we need to tell the stream where that file ends! How do we do that? Of course, by closing the entry! 😉

ZipOutputStream zos = new ZipOutputStream(os);
zos.putNextEntry(new ZipEntry(filename + ".txt"));
zos.write(/* ...the bytes of the file 'filename.txt'... */);
zos.closeEntry(); // marks the end of the current file inside the zip

So what do you say? Wasn’t it a bit tricky? Well, for sure, it was not that obvious to us!


While working with the ZipOutputStream, always remember to close the files (Entries) in it!!!
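To see the fix in isolation, here is a minimal, self-contained sketch (no FTP, no database; it writes a zip to an in-memory buffer and reads it back; the class and entry names are invented for the example):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipEntryDemo {

    // Writes one entry into an in-memory zip, then reads the archive back
    // and returns the name of the recovered entry.
    static String writeAndReadBack() throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(buffer)) {
            zos.putNextEntry(new ZipEntry("DataFeed.txt"));
            zos.write("some feed data".getBytes(StandardCharsets.UTF_8));
            zos.closeEntry(); // the crucial call: marks where the file ends
        } // try-with-resources closes the stream, finishing the archive

        try (ZipInputStream zis = new ZipInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            return zis.getNextEntry().getName();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(writeAndReadBack()); // prints DataFeed.txt
    }
}
```

With `closeEntry()` removed, `close()` on the ZipOutputStream will still finish the archive; the real-world failure in the story came from the intermediate streams being closed out of order, which is why closing each entry explicitly is the safe habit.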

Mockito: the world order

How about verifying the order of the mocks? Interested? I hope you are!

Let’s say that we are about to implement the following user story.

User story:
We expect some feed on the input. The feed is sent with some ID that lets us distinguish it from other feeds we have already saved in the database. When a feed appears, the following should happen:

  • read the feed and extract its ID
  • if the client requests the feed merging option, then:
    • get the latest feed from the repository
    • (merge) copy the data from the latest feed to the one currently being processed
  • save the feed in the repository

Let’s try to approach it by starting off with writing tests first (yes, we like TDD).

Test Case 1:

So we may want to start with the simple scenario in which the client does not expect the feeds to be merged. Therefore we just read, extract and save the feed. Sounds simple…

@Test
public void shouldReadAndSaveTheFeed() throws Exception {
    // given
    FeedRepository mockFeedRepository = Mockito.mock(FeedRepository.class);
    FeedExtractor mockFeedExtractor = Mockito.mock(FeedExtractor.class);
    FeedService feedService = new FeedService(mockFeedExtractor, mockFeedRepository);

    // when
    feedService.process(sampleInputFeed(), false);

    // then
    InOrder inOrder = Mockito.inOrder(mockFeedExtractor, mockFeedRepository);
    inOrder.verify(mockFeedExtractor).extract(Mockito.any()); // extract first (the extract(..) signature is illustrative)
    inOrder.verify(mockFeedRepository, Mockito.times(1)).save(Mockito.anyCollection());
}

So what happened here?

Well, we created a sample FeedService class in which we will put the user story implementation. From the business requirements we know that we need to extract some data from the feed and save the feed to some repository, therefore we created mocks for both of these collaborators.

Next, we demand a specific order of invocations on the mocks. To make it happen, we create an InOrder object from the mocks that we have in our test (note: the order of the arguments does not matter here).

InOrder inOrder = Mockito.inOrder(mockFeedExtractor, mockFeedRepository);

Finally, we interact with the InOrder object by requesting it to verify that the following interactions happened (note: here, the order is important!).

inOrder.verify(mockFeedRepository, Mockito.times(1)).save(Mockito.anyCollection());

The order of these verify calls defines the order we demand. As a second parameter of the verify method we can add e.g. the expected number of interactions with the mock.

Let’s move on to next case.

Test Case 2:

@Test
public void shouldMergeTheFeedsIfRequested() throws Exception {
    // given
    FeedRepository mockFeedRepository = Mockito.mock(FeedRepository.class);
    FeedExtractor mockFeedExtractor = Mockito.mock(FeedExtractor.class);
    FeedMerger mockFeedMerger = Mockito.mock(FeedMerger.class);
    FeedService feedService = new FeedService(mockFeedExtractor, mockFeedMerger, mockFeedRepository);

    // when
    feedService.process(sampleInputFeed(), true);

    // then
    InOrder inOrder = Mockito.inOrder(mockFeedExtractor, mockFeedMerger, mockFeedRepository);
    inOrder.verify(mockFeedMerger).merge(Mockito.anyCollection(), Mockito.anyCollection());
    inOrder.verify(mockFeedRepository, Mockito.times(1)).save(Mockito.anyCollection());
}

Here, we have added another mock (mockFeedMerger) and declared a new interaction flow using the same approach as in the first test case.

If the interactions do not happen in the declared order, then the following failure message will be presented:

Verification in order failure
Wanted but not invoked:
-> at com.gog.bjava.PreservingOrderTests.shouldMergeTheFeedsIfRequested(PreservingOrderTests.java:45)
Wanted anywhere AFTER following interaction:
-> at com.gog.bjava.PreservingOrderTests$FeedService.process(PreservingOrderTests.java:72)

Worth remembering:
– it is easy to accidentally skip some interaction without noticing it and end up with a different flow that is PASSING! For example, if we remove any of the inOrder.verify(..) lines, the test will still PASS, because the overall order of the remaining interactions is preserved. However, if we change the order of the inOrder.verify(..) lines, the test will FAIL.
– remember that it is possible to add an additional verification condition for how many times the mock should be triggered (see the example with Mockito.times(1) above)
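For reference, a FeedService implementation satisfying both tests might look roughly like the sketch below. The collaborator interfaces and method signatures are invented for illustration (the real ones may differ); feeds are modelled as plain Strings to keep the sketch self-contained:

```java
// Hypothetical collaborators, modelled on the mocks used in the tests above.
interface FeedExtractor { String extract(String feed); }
interface FeedMerger { String merge(String latestFeed, String currentFeed); }
interface FeedRepository { String findLatest(); void save(String feed); }

class FeedService {
    private final FeedExtractor extractor;
    private final FeedMerger merger;
    private final FeedRepository repository;

    // Two-argument variant used by the first test: merging is never requested.
    FeedService(FeedExtractor extractor, FeedRepository repository) {
        this(extractor, (latest, current) -> current, repository);
    }

    FeedService(FeedExtractor extractor, FeedMerger merger, FeedRepository repository) {
        this.extractor = extractor;
        this.merger = merger;
        this.repository = repository;
    }

    void process(String feed, boolean mergeRequested) {
        String id = extractor.extract(feed);          // read the feed and extract its ID
        String toSave = feed;
        if (mergeRequested) {                         // only when the client asked for merging
            String latest = repository.findLatest();  // get the latest feed from the repository
            toSave = merger.merge(latest, toSave);    // copy data from the latest feed
        }
        repository.save(toSave);                      // save the (possibly merged) feed
    }
}
```

The extract / merge / save calls happen in exactly the order the InOrder verifications demand, which is the whole point of the tests above.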

jUnit4: expecting the unexpected (Exceptions)

While testing our apps, we should never forget about verifying the unhappy paths – the exceptions.
Below are a few ways to do it in jUnit4.

Example 1. 

Simply use the existing @Test annotation with the predefined value expected = ClassName.class.

@Test(expected = RuntimeException.class)
public void shouldThrowRuntimeException() throws Exception {
    // when
    throw new RuntimeException("Some message");
}

Although this is the simplest way to validate an exception in a test case, it is also the least expressive: e.g. we cannot validate the message carried by the exception.

Example 2.

The lack of expressiveness of the previous example is addressed in jUnit4 by the ExpectedException rule (see below).

First we need to define the rule as a public field in our test class, annotated with @Rule:

@Rule
public ExpectedException thrown = ExpectedException.none();

…and then we can use it in the test methods:

@Test
public void throwsExceptionWithSpecificType() {
    // then
    thrown.expect(RuntimeException.class);
    thrown.expectMessage("There was some problem with parameters");
    thrown.expectCause(Matchers.<Throwable>hasProperty("message",
                    Matchers.equalTo("[importantParameter] should not be null")));

    // given
    String importantParameter = null;

    // when
    try {
        Objects.requireNonNull(importantParameter, "[importantParameter] should not be null");
    } catch (NullPointerException e) {
        throw new RuntimeException("There was some problem with parameters", e);
    }
}

As you can see in the code sample above, first we need to define the expected behavior, which might look a bit odd at first. Next, we prepare and execute the code that is under test.
The clear benefit of this approach is the possibility to verify the exception type, its message, and the root cause and its message as well. Moreover, all of these checks can be expressed with hamcrest matchers, which I find very beneficial.

Example 3.

Of course, we can forget about jUnit and all of the annotations it offers and use some 3rd party library. One that I find very useful, expressive and simple to use is AssertJ, presented below:

@Test
public void shouldThrowNullPointerException() throws Exception {
    // given
    String importantParameter = null;

    // when and then
    Assertions.assertThatThrownBy(
            () -> {
                try {
                    Objects.requireNonNull(importantParameter, "[importantParameter] should not be null");
                } catch (NullPointerException e) {
                    throw new RuntimeException("There was some problem with parameters", e);
                }
            })
            .isInstanceOf(RuntimeException.class)
            .hasMessage("There was some problem with parameters")
            .hasCauseInstanceOf(NullPointerException.class);
    // Note: cannot check the cause message
    //      .hasMessage("[importantParameter] should not be null");
}

I like this approach the most. Though the API does not offer the possibility to validate the message of the inner exception (the one that may actually have caused the issue), it has the nice features of:
– wrapping the code under test, and
– defining the expected behavior with a builder-style pattern.