Hello World?

Hello World,

Is that a pun? I’ll let you figure that one out. This blog was actually created six years ago and required some renaming and tidying up, but its purpose now is to track my progression through the Computer Science curriculum, starting as I enter my junior year at the local state university. This blog will hopefully be revived, not only as instructed by my professor, but continued on my own to learn more about the subject.


Differences in Integration Testing

Source: http://www.satisfice.com/blog/archives/1570

The blog post Re-Inventing Testing: What is Integration Testing? by James Bach takes an interview-like approach to exploring integration testing. It starts with the simple question “What is integration testing?” and goes on from there. As the respondent answers, James adds a running analysis of each answer and what he is thinking at that point in time. The objective of the interview is to test both his own knowledge and the interviewee’s.

This was an interesting read because it relates to a topic from a previous course, coupling, which describes the degree of interdependence between software modules, and that is why I chose this blog post. What I found interesting is that his chosen interviewee was a student, so the entire conversation can be viewed from a teacher-and-student perspective. This is useful because it lets me see how a professional would like an answer to be crafted and presented clearly. For example, the interviewee’s initial answer is textbook-like, which prompted James to press for details.

The conversation also provides some useful information about integration testing itself. Integration testing is performed when multiple pieces of software are combined and tested together as a group. The blog post notes that not all levels of integration are the same. Sometimes “weak” forms of integration exist; an example from the blog is when one system creates a file for another system to read. There is a slight connection between the two systems because they interact with the same file, but as independent systems, neither one knows that the other exists.
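To make that “weak” integration idea concrete, here is a minimal Java sketch of my own (not code from the blog post) in which one system writes a file and a completely separate system reads it. Neither class references the other; the only thing they share is the agreed-upon file path and line format.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class WeakIntegrationDemo {

    // "System A": writes a report to a shared location, knowing nothing about any reader.
    static void produceReport(Path path) throws IOException {
        Files.write(path, List.of("orders=42", "errors=0"));
    }

    // "System B": reads whatever is at the agreed path, knowing nothing about the writer.
    static void consumeReport(Path path) throws IOException {
        for (String line : Files.readAllLines(path)) {
            System.out.println("processing: " + line);
        }
    }

    public static void main(String[] args) throws IOException {
        Path shared = Path.of("report.txt"); // the only coupling between the two "systems"
        produceReport(shared);
        consumeReport(shared);
    }
}

An integration test here would exercise both sides through the file, for example running produceReport and then asserting on what consumeReport sees, which is exactly the kind of risk that testing either class alone would miss.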

From this blog post, I can tell that any kind of testing requires much more than textbook knowledge of the subject. As mentioned briefly in the blog, there are risks involved in integrating two independent systems, and there is a certain amount of communication between them. The amount of communication determines the level of integration between the two: the stronger the communication, the more the two systems depend on one another to execute certain functions.

Liskov Substitution Principle

Source: https://www.tomdalling.com/blog/software-design/solid-class-design-the-liskov-substitution-principle/

SOLID Class Design: The Liskov Substitution Principle, written by Tom Dalling on Tomdalling.com, is part of a five-part series about the SOLID class design principles in OOP. He starts off with a problem about inheritance. For instance, a penguin is a bird, which suggests an “is a” relationship. However, when the penguin inherits from the bird class, it also inherits the fly method, and as soon as you override the fly method to do nothing, you violate the LSP. Tom then explains that, following from the Open/Closed Principle, subclasses must follow the interface of the abstract base class. If existing code has to be altered to account for certain subclasses, that also violates the Open/Closed Principle of being able to extend a class’s behavior without modifying it. In conclusion, two solutions are presented: one is adding a method to check whether a bird is a flying or non-flying bird. The other, which he states is the better solution, is to create separate classes for the flightless type, so that the fly method is not inherited from the superclass.
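A rough Java sketch of the solution he prefers (the class names here are my own, not Tom’s): the fly method lives on a flying-bird subclass rather than on the bird base class, so a penguin never inherits a method it would have to stub out.

public class Birds {

    static abstract class Bird {
        abstract void eat();
    }

    // Only birds that can actually fly expose fly(), so no subclass is ever
    // forced to override it with an empty body.
    static abstract class FlyingBird extends Bird {
        abstract void fly();
    }

    static class Sparrow extends FlyingBird {
        void eat() { System.out.println("sparrow eats"); }
        void fly() { System.out.println("sparrow flies"); }
    }

    static class Penguin extends Bird {
        void eat() { System.out.println("penguin eats"); } // no fly() to break
    }
}

Code written against FlyingBird can safely call fly() on any subtype it receives, which is exactly the substitutability the LSP asks for.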

I did not realize, before reading this blog post, that we had already touched upon the Liskov Substitution Principle in the CS-343 assignments that revolved around refactoring pre-existing, poorly implemented code by applying design patterns. Choosing this post therefore serves as a great source of review material for topics covered in class. The assignment that incorporated multiple design patterns started off with a clear case for applying the LSP, as the original design had two instances of methods overridden to do nothing, and the criteria for our first refactor required the LSP along with other inheritance reworks. Since we were working with ducks, you can imagine a QuackBehavior and a FlyBehavior that had to cover both real and inanimate ducks. The LSP application is similar to the second solution presented earlier, in that the fly and quack methods aren’t inherited from a superclass but rather come from an interface implemented by the duck class. Even if a particular implementation still does nothing, it isn’t an override that weakens the superclass, so it does not violate the LSP.
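Along the same lines as that refactor, here is a sketch of the general strategy-pattern idea (my own simplified example, not our actual assignment code): the behaviors become interfaces with interchangeable implementations, so a “do nothing” behavior is a legitimate implementation rather than an override that hollows out an inherited contract.

public class DuckDemo {

    interface FlyBehavior { void fly(); }
    interface QuackBehavior { void quack(); }

    static class FlyWithWings implements FlyBehavior {
        public void fly() { System.out.println("flying with wings"); }
    }

    // Doing nothing is a valid implementation of the interface,
    // not a subclass weakening an inherited method.
    static class FlyNoWay implements FlyBehavior {
        public void fly() { /* intentionally does nothing */ }
    }

    static class Quack implements QuackBehavior {
        public void quack() { System.out.println("quack"); }
    }

    static class Duck {
        private final FlyBehavior flyBehavior;
        private final QuackBehavior quackBehavior;

        Duck(FlyBehavior fly, QuackBehavior quack) {
            this.flyBehavior = fly;
            this.quackBehavior = quack;
        }

        void performFly() { flyBehavior.fly(); }
        void performQuack() { quackBehavior.quack(); }
    }

    public static void main(String[] args) {
        Duck decoy = new Duck(new FlyNoWay(), new Quack());
        decoy.performFly();   // a no-op, but no superclass contract is broken
        decoy.performQuack();
    }
}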

Like other OOP principles, these exist to help produce code that is maintainable and reusable. Being aware of the SOLID class design principles will hopefully keep these kinds of code smells out of future projects. Also, by understanding them and applying them wherever they fit best, my code will be cleaner, easier to maintain, and easier to extend with new features.

Don’t be an Outlaw

Source: https://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx/

The Law of Demeter Is Not A Dot Counting Exercise by Phil Haack on haacked.com is a great read on applying the Law of Demeter. Phil starts off by analyzing a code snippet to see if it violates the “sacred Law of Demeter”, then gives a short briefing on the Law by referencing a paper by David Bock. He goes on to clear up a misuse of the Law of Demeter by people who do not know it well, hence the title of his post: “dot counting” does not necessarily tell you that there is a violation of the law. He closes with an example by Alex Blabs showing that when you mechanically fix every call with multiple dots, you can effectively lose out on code maintainability. Lastly, he explains that digging deeper into new concepts is all well and good, but being able to explain the disadvantages alongside the advantages shows a better understanding of the topic.

Encapsulation, as the concept was introduced to me, is about encapsulating what varies. The Law of Demeter is a more specific application of that idea, aimed at methods. It is formally written as “Each unit should have only limited knowledge about other units: only units ‘closely’ related to the current unit.” The Paperboy and Wallet example in David Bock’s paper makes it easy to understand where this is coming from. Giving a method access to more information than it needs is unnecessary and should be avoided, and letting a method reach directly into state managed by another object is a bad idea. By applying the Law of Demeter, you encapsulate this information, which simplifies the calling code even if it adds a few delegating methods to the class itself. Overall, you end up with a product that is easily maintainable, in the sense that if you change something in one place, the change applies everywhere it’s used.
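Here is a small Java sketch of the Paperboy and Wallet idea as I understand it from Bock’s paper (the names and amounts are my own simplification): instead of reaching through the customer into the wallet, the paperboy simply asks the customer for payment.

public class PaperboyDemo {

    static class Wallet {
        private double total;

        Wallet(double total) { this.total = total; }

        double withdraw(double amount) {
            double taken = Math.min(amount, total);
            total -= taken;
            return taken;
        }
    }

    static class Customer {
        private final Wallet wallet = new Wallet(20.0);

        // The customer decides how payment happens; the wallet stays hidden.
        double getPayment(double amount) { return wallet.withdraw(amount); }
    }

    static class Paperboy {
        // Demeter-friendly: only talk to the Customer that was handed to you.
        // A violation would look like customer.getWallet().withdraw(owed).
        double collect(Customer customer, double owed) {
            return customer.getPayment(owed);
        }
    }

    public static void main(String[] args) {
        System.out.println(new Paperboy().collect(new Customer(), 2.0));
    }
}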

Although encapsulation is not a new topic, knowing how to properly apply it to methods by way of the Law of Demeter should be good practice. That means remembering that “a method of an object may only call methods of the object itself, an argument of the method, any object created within the method, and any direct properties/fields of the object.” For example, applying the Law of Demeter to a chain of get calls is a good idea, while importing many classes that you won’t actually use is a bad sign. With this understanding, incomplete as it is, I will hopefully avoid violating the Law of Demeter and share it with my colleagues.

What is C4?

Source: http://codermike.com/starting-c4

Getting Started with C4 by Mike Minutillo on Coder Mike is a blog post introducing the C4 model. The C4 model is a way of communicating and describing software architecture. It views a software system as being made up of containers, each of which has components that are in turn implemented by classes; context, the remaining piece, describes the system’s place among its users and other systems and the relationships between them. These four, context, containers, components, and classes, are what make up C4. Mike then goes in depth on how the model helped him move forward from a blank whiteboard: by starting with a context diagram, he was able to map out the problems with the design and revise it for the next version, repeating the process until the team was satisfied and could continue down the right path.

Tools are developed over time to solve common problems, and not using the tools already available to us would be a waste. Learning about UML diagrams naturally leads into learning the C4 model, which itself uses UML diagrams at the class level. The idea of clearly representing and describing the parts of a software system, much as UML does for classes, makes this a worthwhile read. The diagrams within the model, such as the system context diagram, are extremely valuable to me because they allow me to explain how everything fits together from a higher-level view.

Much like with UML diagrams, the author stated that your initial diagram doesn’t have to be perfect; it’s going to change over time. Being able to show, explain, and revise is worth far more than time spent working out a grand solution up front. Using the different diagrams will make time spent on projects much more productive, especially when there is no clear starting point, as Mike pointed out. Starting with the context diagram explains what things do and how they relate to each other. Then we can move to the container diagram, where the important parts are separated into pieces like a database or an app. From there I can start grouping related functionality into components, which are then accompanied by classes explained using UML diagrams. These four diagrams will provide a nice guide for myself and my group members during implementation, and they double as proper documentation.
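As a made-up illustration (my own example, not one from Mike’s post), the four levels for a simple online store might break down like this: the context diagram shows the Online Store system used by customers and talking to an external payment provider; the container diagram splits the store into a web application, a REST API, and a database; the component diagram inside the API shows an ordering component, an inventory component, and a billing component; and the class diagram for the ordering component shows classes such as Order, OrderItem, and OrderRepository in UML.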

Leaving a trail…

Source: https://blog.codinghorror.com/if-it-isnt-documented-it-doesnt-exist/

“If It Isn’t Documented, It Doesn’t Exist,” written by Jeff Atwood, is probably a set of words to live by. Jeff expresses his thoughts on proper documentation based on his personal experience working on open source projects, which can be summarized by a single sentence, as stated in the blog: “Good Documentation is hard to find.” He agrees with a couple of key points made by James Bennett, who wrote the blog post Choosing a JavaScript Library. These boil down to having a proper overview of each section of your project or design, providing examples of usage where needed, documenting everything, and keeping regular comments throughout the code itself. Although the points were written specifically about JavaScript libraries, I tried to apply them to regular Java coding as well. It all leads up to another great statement made by Mr. Bennett: “most treat documentation like an afterthought.”

Truthfully, upon finishing and reviewing my own code for Assignment 1 in my CS-343 class, I realized that documentation and comments within the code were non-existent, which led me to relearn the importance of proper documentation, as once taught in my CS-140 class. Admittedly, it has not been a requirement or a big factor in assignments, even in the other CS courses I have taken between CS-140 and CS-343. However, it’s important to remember that a properly documented project can benefit yourself and others in many ways. For example, it can be used to assist in explanations or to let others understand what is being done at a certain point. It can also help you pick up where you left off after reading what you have written. As Nicholas Zakas simply stated, “…if you’re the only one who understands it, it doesn’t do any good.”

Practicing proper documentation techniques early on will help develop the skill of determining how much is necessary to document. Too much unnecessary documentation hurts more than documenting only what is needed, a problem I had when I actually did add documentation to projects. This also includes understanding when and where to use comments, Javadocs, and so on throughout a project. Currently, I treat documentation like an afterthought, to the point that it isn’t applied at all. In the future, I hope to apply this skill and use it to my advantage, not only for myself but for others as well.
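As a small example of the kind of documentation I mean (a generic Java sketch, not code from the assignment), a Javadoc comment on a method can carry the overview and usage notes, while an inline comment explains the non-obvious step:

public class AccountUtils {

    /**
     * Returns the balance after applying simple interest.
     *
     * @param balance the starting balance in dollars
     * @param rate    the annual interest rate, e.g. 0.05 for 5%
     * @param years   the number of whole years to apply
     * @return the new balance
     */
    public static double applyInterest(double balance, double rate, int years) {
        // Simple (not compound) interest: principal * rate * time.
        return balance + balance * rate * years;
    }
}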

Ouch, that’s sharp!

Source: https://blog.codinghorror.com/flattening-arrow-code/

Stumbling across this week’s blog post, Flattening Arrow Code by Jeff Atwood, I found myself reading about arrow code, whose name is fairly self-explanatory: deeply nested conditionals push the code into an arrow shape. Jeff explains how he deals with arrow code, which is considered an anti-pattern, a bad practice that should be avoided wherever possible. The main benefit of refactoring it is reducing the cyclomatic complexity value, a measure of how complex a piece of code is: the larger the value, the higher the complexity, and the lower the value, the lower the complexity. Another benefit, in Jeff’s words, is “…to have code that scrolls vertically a lot… but not so much horizontally.”

When I look at code, I never suspect that there are distinct patterns deemed bad practice. Reading about arrow code, I first thought, “Hey, I’ve seen that before,” but came away thinking, “Oh, that is probably not the way to go about writing code.” In other words, I should probably learn this now. Learning about anti-patterns, even though the only one covered in this blog is the arrow pattern, is a great way to avoid certain practices in the future. As of right now, I don’t fully understand what he said about converting negative checks into positive checks or decomposing conditional blocks into separate functions. Other ideas were clearer, like guard clauses, which are conditional statements at the top of a function that bail out as soon as they can, and which I assume also help reduce the time and resources spent running that function. That goes along with letting go of the idea that there should be only one exit point at the bottom of a function; sometimes it is fine to exit at a different point rather than the very bottom. These two points are good information as standalone practices.
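A small before-and-after in Java (my own example of the technique, not code from Jeff’s post) shows what flattening with guard clauses looks like:

public class GuardClauseDemo {

    // Arrow code: every check pushes the real work one level deeper.
    static String shipArrow(String address, boolean paid, boolean inStock) {
        if (address != null) {
            if (paid) {
                if (inStock) {
                    return "shipped to " + address;
                } else {
                    return "backordered";
                }
            } else {
                return "awaiting payment";
            }
        } else {
            return "missing address";
        }
    }

    // Flattened: guard clauses bail out early, leaving the happy path at one level
    // and accepting more than one exit point in the function.
    static String shipFlat(String address, boolean paid, boolean inStock) {
        if (address == null) return "missing address";
        if (!paid) return "awaiting payment";
        if (!inStock) return "backordered";
        return "shipped to " + address;
    }

    public static void main(String[] args) {
        System.out.println(shipArrow("12 Main St", true, true));
        System.out.println(shipFlat("12 Main St", true, true));
    }
}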

From this simple example, I can see that arrow code is like YAGNI, KISS, code smells, and so on: by understanding or at least being introduced to the bad practices early on, I gain two things. One, I am able to identify and solve the problems using common strategies that have already been developed and tested. Two, I can avoid putting myself into situations where I would later need to refactor sections just to make them modifiable and maintainable. Although cyclomatic complexity is only briefly mentioned in the blog post, it reminded me of time complexity in algorithm analysis. Even though they are two different concepts, aiming for a smaller cyclomatic complexity value should result in a less extensive testing phase for the code, which should save time down the road.