Hello World?

Hello World,

Is that a pun? I’ll let you figure that one out. This blog was actually created six years ago and required some renaming and tidying up, but now its purpose is to track my progression through the Computer Science curriculum, starting now as I enter my junior year at the local state university. This blog will hopefully be revived, not only as instructed by my professor, but continued on my own to learn more about the subject.


Path or No Path!

Source: http://www.professionalqa.com/path-testing

This week’s reading is about path testing. A vital part of software engineering is ensuring that proper tests are in place so that errors or issues in a software product are resolved before they can turn into a costly threat to the product. Path testing helps evaluate and verify the structural components of the product to ensure its quality meets the standards. This is done by checking every possible executable path in the software product or application. Simply put, it is another structural testing method used when the source code is available. Several techniques fall under this method, for example control flow graphs, basis path testing, and decision-to-decision (DD) path testing. Each has its fair share of advantages and disadvantages, but path testing is considered a vital part of unit testing and will likely improve the functionality and quality of the product.
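To make the idea concrete, here is a small sketch of my own (not from the article) of what covering every executable path looks like: a function with two decision points, and one test per distinct path through it.

```python
def grade(score):
    """Classify a score; two decision points yield three executable paths."""
    if score < 0:
        raise ValueError("score must be non-negative")  # path 1: error path
    if score >= 60:
        return "pass"                                   # path 2
    return "fail"                                       # path 3

# One test per executable path:
def test_error_path():
    try:
        grade(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_pass_path():
    assert grade(75) == "pass"

def test_fail_path():
    assert grade(40) == "fail"

if __name__ == "__main__":
    test_error_path()
    test_pass_path()
    test_fail_path()
    print("all paths covered")
```

With only three tests, every statement and branch in `grade` is exercised, which is the essence of path coverage on a function this small.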

What I found thought-provoking about the content is the section on the significance of a path. Understanding what the term “path” means certainly breaks down the importance of this test: a path describes a program’s execution path, from initialization to termination. As a white-box testing technique, we can be sure that the tests cover a large portion of the source code. It is also useful that the article acknowledges the problems that can be found while doing path testing. These errors are typically caused by processes executing out of order or by code that has yet to be refactored, for example leftover code from a previous revision or variables initialized in places where they should not be. Path testing reveals these error paths and greatly improves the quality of the code base. I also agree that path testing, like most white-box techniques, requires individuals who know the code base well enough to contribute to these types of tests, which brings another downside: it will not catch issues that can only be found through black-box testing. This article let me reinforce what I had learned in class about path testing and DD-path testing.

Black-Box vs White-Box Testing

Source: https://www.guru99.com/back-box-vs-white-box-testing.html

This week’s reading is about the differences between black-box and white-box testing. For starters, it states that in black-box testing the tester does not have any information about what goes on inside the software, so it mainly focuses on tests from the outside, at the level of the end user. In comparison, white-box testing allows the tester to check within the software. Testers have access to the code, which is why white-box testing is also called code-based testing. The article lists the differences between the two in a table format: the basis of testing, usage, automation, objective, and many other categories all differ. For example, black-box testing is said to be ideal for system testing and acceptance testing, while white-box testing is much better suited for unit testing and integration testing. The many advantages and disadvantages of each method are clearly defined and provide a clear consensus on how each method pans out.

What I found useful about this article is the clear and concise language it uses to describe each category. Unlike other articles I’ve come across on the topic, which beat around the bush and make it difficult to discern the importance of each type of testing, much of the information in this article can be supported by activities done in class. One of the categories, time, labeled black-box testing as less exhaustive and time-consuming, while white-box testing is the very opposite. I somewhat agree with this description: with white-box testing you have much more information to work with, and every detail of the code can be processed into a test as deemed necessary, so the overall quality of the code, as stated, is checked during the test. In black-box testing, the main objective is to test the functionality, which makes it a less extensive test in general. What also struck me as interesting was the category granularity. A single Google search yielded the meaning “the scale or level of detail present in a set of data”: low for black-box and high for white-box, which rings true for both tests. In conclusion, this article reinforces prior knowledge of the differences between black-box and white-box testing.
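A tiny sketch of my own (the function and tests here are hypothetical, not from the article) shows how the two perspectives test the same code differently: black-box tests come from the specification alone, while white-box tests come from reading the branches in the source.

```python
def abs_diff(a, b):
    """Return the absolute difference between a and b."""
    if a >= b:
        return a - b
    return b - a

# Black-box: derived only from the spec ("returns the absolute difference"),
# with no knowledge of how abs_diff is implemented.
assert abs_diff(2, 5) == 3
assert abs_diff(5, 2) == 3

# White-box: derived from the code itself, one test per branch,
# including the boundary case the source code reveals.
assert abs_diff(4, 4) == 0   # exercises the a >= b branch at its boundary
assert abs_diff(0, 1) == 1   # exercises the a < b branch
```

The black-box tests would survive a complete rewrite of `abs_diff`; the white-box tests exist precisely because we can see where the branches are.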

Dynamic Test Process

Source: https://www.guru99.com/dynamic-testing.html

This week’s reading is a dynamic testing tutorial written by Radhika Renamala. Dynamic testing is a software testing technique in which the dynamic behavior of the code is analyzed. An example provided is a simple login page that requires a username and password from the end user. When the user enters a username or password, there is an expected behavior based on that input; by comparing the actual behavior to the expected behavior, you are working with the running system to find errors in the code. The article also lays out a dynamic testing process: test case design and implementation, test environment setup, test execution, and bug reporting. The first step is simply identifying the features to be tested and deriving test cases and conditions for them; then set up the test environment, execute the tests, and document the findings. This method can reveal hidden bugs that cannot be found by static testing.
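The article’s login example can be sketched roughly like this (the `login` function and its credentials are stand-ins I made up, not code from the tutorial): the system is actually executed with each input, and the actual behavior is compared against the expected behavior.

```python
def login(username, password):
    """Hypothetical stand-in for a real authentication check."""
    valid = {"alice": "s3cret"}
    return valid.get(username) == password

cases = [
    # (username, password, expected behavior)
    ("alice", "s3cret", True),    # valid credentials -> accepted
    ("alice", "wrong",  False),   # bad password -> rejected
    ("",      "s3cret", False),   # missing username -> rejected
]

# Dynamic testing: run the code with each input and compare
# the actual behavior to the expected behavior.
for username, password, expected in cases:
    actual = login(username, password)
    assert actual == expected, f"{username!r}: expected {expected}, got {actual}"
```

Unlike a static review of the source, these checks only mean something because the code actually runs against each input.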

This reading was interesting because I thought the process was simpler than what is written in this article. Personally, I thought randomly inputting values and observing the output would be sufficient. However, the simplified steps are not as simple as they look, as there are necessary considerations. The author warns the reader that other factors should be considered before jumping into dynamic testing; two of the most important are time and resources, as they will make or break the efficiency of running these tests. Static testing, as I have learned, is based more around creating tests from the code provided by the user, which makes it easy to create tests that are clearly related to the code. But it does not let testers think outside the box the way dynamic testing does. Dynamic testing will certainly create abnormal situations that can surface a bug, and as stated in the article, this type of testing is useful for increasing the quality of your product. After reading this article, I can see why the author concluded that using static and dynamic testing in conjunction is a good way to properly deliver a quality product.

Developer Ego Begone!

Source: https://blog.lelonek.me/how-should-we-do-code-reviews-ced54cede375

This week’s reading is a blog post about conducting code reviews properly, by Kamil Lelonek. It gives a general overview of code review as a process of giving feedback on another person’s code: by rejecting and approving certain changes to the codebase, the process generates improvements as a whole. However, it goes much further than that, because code review is not as simple as it seems. Despite benefits such as catching bugs early and ensuring that the code is legible and maintainable going forward, the post stresses that developers are very protective of the code they write and will attempt to defend it against criticism. So it provides different approaches to mitigating problems that can arise during code review, approaches that let feedback reach the author without appearing as a threat. Some of these techniques are distinguishing opinions from facts, avoiding sarcasm, and being honest with yourself. Understanding the ten tips provided should make code reviews more effective for everyone involved.

What I found interesting about the article is how straightforward it is in addressing one’s ego. The author is right that developers like to say they have written good code, but sometimes they need to leave their ego behind; refusing to open up to criticism and treating it as a threat is detrimental to the team as a whole. Also, when actively reviewing code, I can see that providing evidence when nitpicking certain lines makes it easier for the reviewee to understand exactly what you are addressing. However, I believe avoiding jokes and sarcasm should be remembered as a top priority, especially when you are reviewing code for a friend. Recalling from personal experience, I do believe I did not help a peer to the best of my abilities because I used sarcasm during a code review. The same goes for distinguishing opinions from facts, where through habit you can be led to believe that one technique is simply better than another. In conclusion, these tips are great for improving code review sessions.

Test Automation, are you doing it right?

Source: https://www.softwaretestinghelp.com/automation-testing-tutorial-1/

This week’s reading was a test automation tutorial. It defined test automation as a technique in which tests run automatically and the actual outcome is compared with the expected outcome. It is mainly used to automate repetitive tasks that are difficult to perform manually. Test automation lets testers achieve consistent accuracy and consistent steps in testing, which in return reduces the overall time spent testing the same thing over and over. Since the tests should not become obsolete, new tests can be added on top of the current scripts as a product evolves. The tutorial also suggests that these tests be planned so that maintenance is minimal; otherwise, time will be wasted fixing automation scripts. The benefits are huge, but there are challenges, risks, and other obstacles, such as knowing when not to automate and to turn to manual testing instead, which allows a more analytical approach in certain situations. This relates directly to the misconception that no bugs are being introduced just because the automation scripts are running smoothly. The tutorial concludes that test automation is only right for certain types of tests.

I found this tutorial incredibly helpful because it provided real-life situations as examples for many of the topics covered. It is effective at making the reader see the reality behind test automation through the five W’s (who, what, when, where, and why), even if they are not stated explicitly. I can conclude that I took test automation for granted, since I assumed that all tests would be automated regardless. That way of thinking is a wrong step for a tester, as not all bugs can be discovered through pre-defined, static test cases. Manual testing is necessary to nudge bugs into appearing through manual intervention, as it pushes the limits of the product. Overall, my main takeaway is the planning phase of test automation. By splitting different tests into different groups, we can easily set an ordered path for testing. For example, it is best to test basic functionality, then integration, before testing specific features and functionality; it would logically be more difficult to solve complex bugs before smaller ones. It goes to show that test automation is not as easy as it looks.
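The grouping and ordering I describe above could be sketched like this (my own minimal example with `unittest`, not code from the tutorial; the three groups and their contents are hypothetical): basic smoke tests run first, then integration, then specific features, stopping early if a basic group fails.

```python
import unittest

class SmokeTests(unittest.TestCase):
    """Basic functionality: if these fail, nothing else is worth running."""
    def test_addition_works(self):
        self.assertEqual(1 + 1, 2)

class IntegrationTests(unittest.TestCase):
    """Components working together."""
    def test_components_together(self):
        data = {"total": sum([1, 2, 3])}
        self.assertEqual(data["total"], 6)

class FeatureTests(unittest.TestCase):
    """Specific features, checked last."""
    def test_specific_feature(self):
        self.assertEqual("Hello World".lower(), "hello world")

if __name__ == "__main__":
    # Run the groups in order; failfast stops a group at its first failure.
    runner = unittest.TextTestRunner(failfast=True)
    for group in (SmokeTests, IntegrationTests, FeatureTests):
        runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(group))
```

The ordering itself is a planning decision, not something `unittest` enforces, which is exactly why the planning phase matters.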

Differences in Integration Testing

Source: http://www.satisfice.com/blog/archives/1570

The blog post “Re-Inventing Testing: What is Integration Testing?” by James Bach takes an interview-like approach to exploring integration testing. It starts with the simple question “What is integration testing?” and goes from there. As the respondent answers, James leaves a descriptive analysis of each answer and what he is thinking at that point in time. The objective of the interview is to test both his own knowledge and the interviewee’s.

This was an interesting read because it relates to a topic from a previous course, coupling, which is the degree of interdependence between software modules, and that is why I chose this blog post. What I found interesting is that his chosen interviewee was a student, so the entire conversation can be viewed from a teacher-and-student perspective. This is useful because it lets me see how a professional would like an answer to be crafted and presented in a clear manner. For example, the interviewee’s initial answer is textbook-like, which prompted James to press for details.

The conversation also yields some useful information about integration testing. Integration testing occurs when multiple software components are combined and tested together as a group. The blog post notes that not all levels of integration are the same: sometimes “weak” forms of integration exist. An example from the blog is when one system creates a file for another system to read. There is a slight connection between the two systems because they interact with the same file, but as independent systems, neither knows that the other exists.
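That “weak” file-based integration can be sketched in a few lines (the two systems and the file format here are hypothetical stand-ins, not from the blog post): the shared file is the only point of contact, so it is also the only thing an integration test can exercise.

```python
import json
import os
import tempfile

def system_a_export(path):
    """System A writes its output to an agreed-upon file; it knows nothing of B."""
    with open(path, "w") as f:
        json.dump({"orders": 3}, f)

def system_b_import(path):
    """System B independently reads that file; the file format is the contract."""
    with open(path) as f:
        return json.load(f)

# The integration test exercises only the shared file, the single point
# of contact between two otherwise independent systems.
path = os.path.join(tempfile.mkdtemp(), "handoff.json")
system_a_export(path)
assert system_b_import(path) == {"orders": 3}
```

Neither function calls or imports the other, which is what makes this integration “weak”: break the file format and both systems still run fine in isolation, yet the pair fails.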

From this blog post, I can tell that any testing requires much more than textbook knowledge of the subject. As briefly mentioned in the blog, there are risks involved in integrating two independent systems, and there is a certain amount of communication between them. The amount of communication determines the level of integration: the stronger the communication between the two systems, the more they depend on one another to execute certain functions.