# Partial Fraction Decomposition!

In calculus, when working with integrals, Laplace transforms, and so on, there is a process called Partial Fraction Decomposition that helps a lot whenever rational functions are involved. To begin with, it's important to understand the definitions of proper and improper fractions.

A proper fraction is a fraction whose numerator has a smaller degree than its denominator. An improper fraction is exactly the opposite: a fraction whose numerator's degree is greater than or equal to the denominator's. This kind of fraction can be a problem when working with the Partial Fraction Decomposition method; sometimes we'll need to do a simple polynomial division in order to rewrite it as a proper fraction that the method can handle. We'll work through an improper fraction later on.

There's a fundamental motto that is important to grasp: "any proper rational function can be decomposed into a sum of simple fractions".

In the most common case, when we can decompose the denominator into simple factors (factors with multiplicity no bigger than 1), each factor contributes one simple fraction. When the denominator decomposes into factors with some multiplicity, we simply repeat those factors, once for each power up to the multiplicity.

## Proper Fractions
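In symbols, the two cases just described look like this (a reconstruction; the original formulas were images that did not survive). With distinct linear factors, each factor gets one constant:

```latex
\frac{p(x)}{(x-a_1)(x-a_2)\cdots(x-a_k)}
  = \frac{A_1}{x-a_1} + \frac{A_2}{x-a_2} + \cdots + \frac{A_k}{x-a_k}
```

and with a factor of multiplicity $m$, we repeat it with increasing powers:

```latex
\frac{p(x)}{(x-a)^m}
  = \frac{A_1}{x-a} + \frac{A_2}{(x-a)^2} + \cdots + \frac{A_m}{(x-a)^m}
```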

The best way of understanding it is by practising, so let's do it. In our example the denominator factors as $(x-1)(x+2)$; notice that I factored it using the sum/product method, but you can do it however you want, for instance using Bhaskara's formula. Writing $p(x)$ for the numerator, the decomposition has the form $\frac{p(x)}{(x-1)(x+2)} = \frac{A}{x-1} + \frac{B}{x+2}$. Now we just have to put the two partial fractions over their least common multiple and simplify them, and then work with the resulting equality of numerators: $p(x) = A(x+2) + B(x-1)$. In order to find the values of A and B, we let $x$ be the root of the expression multiplying the other constant, one at a time. To find B, let $x=-2$; in order to find A, we do the same process and let $x=1$. Substituting A and B back in gives the final decomposition.

## Improper Fractions

As said before, when we have an improper fraction, a fraction whose numerator's degree is greater than or equal to the denominator's, we have to perform a polynomial division, which lets us rewrite the fraction in terms of the division's quotient, the remainder, and the divisor.
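In symbols, dividing the numerator $f(x)$ by the denominator $g(x)$ gives a quotient $q(x)$ and a remainder $r(x)$, so we can write:

```latex
\frac{f(x)}{g(x)} = q(x) + \frac{r(x)}{g(x)}, \qquad \deg r < \deg g
```

For instance (an illustrative example of my own, since the original one did not survive), $\frac{x^2}{x-1} = x + 1 + \frac{1}{x-1}$, whose last term is now a proper fraction.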

# Riemann Sum Explained!

Today we're going to talk about how Riemann found a way of calculating the area under the graph of a function, and how it later became known as an "integral". At first Riemann hadn't worked with integrals yet; he thought that a good approximation of the area under a curve could be obtained using rectangles. As we know, the area of a rectangle is "base × height", and obviously the more rectangles you use, the better the approximation you'll get. I'll explain the Lower and Upper Sum concepts later on.

Now let's think about a function; it doesn't matter which, but to make our lives easier let's take a continuous function $f$ on an interval $[a, b]$, a partition $P$ of that interval, and a point $c_i$ picked from each subinterval $I$.

To calculate the width of the rectangles we're going to use, we subtract the lower bound from the upper bound and divide it by the amount of rectangles ($n$) we are using; we call it "delta-x":

$$\Delta x = \frac{b-a}{n}, \qquad x_i = a + i\,\Delta x.$$

Here's the deal: $c_i$ can be used in two different ways, taking either the left or the right endpoints of the rectangles; the left endpoint is represented by $x_{i-1}$, the right endpoint by $x_i$. If the function is decreasing, the left endpoints will over-estimate the area under the curve while the right endpoints will under-estimate it. On the other hand, if the function is increasing, the left endpoints will under-estimate the area under the curve while the right endpoints will over-estimate it. Using the left and right endpoints respectively, the sums are:

$$L_n = \sum_{i=1}^{n} f(x_{i-1})\,\Delta x, \qquad R_n = \sum_{i=1}^{n} f(x_i)\,\Delta x.$$

Of course, when using this notation we only get an approximation, so Riemann thought it would be good to use as many rectangles as possible; the number of rectangles is given by $n$, so Riemann used a pre-calculus concept called the "limit".
Sending the number of rectangles to infinity gives us the exact value of the sum, which later would be seen as the integral from a to b of f(x) (we'll see that later):

$$\lim_{n \to \infty} \sum_{i=1}^{n} f(x_i)\,\Delta x = \int_a^b f(x)\,dx.$$

Before working on an example, we have some summations to keep in mind:

$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}, \qquad \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}.$$

Let's do an exercise: computing such a sum step by step, the area under the curve f(x) comes out to 28/3. This process is really tough and rigorous; getting one value wrong can change all the calculations and cause an error. So Riemann thought about introducing a new concept, called an "integral".

An integral is called the anti-derivative of a function, because the integral of a function is exactly the function that, if you differentiate it, gives you back the original function. We call $F(x)$ the primitive of $f(x)$, with $F'(x) = f(x)$; using this, Riemann represented his sum with the integral notation:

$$\int_a^b f(x)\,dx = F(b) - F(a).$$

I won't show the integration process here, because first I still have to write about primitives.

That's all.

Ricardo Santos

# Leibniz π Formula!

Back in the 17th century, the well-known German mathematician Gottfried Wilhelm Leibniz managed to find a beautifully simple series representation of π, using power series. I may talk about power series in further posts, but in this one I'll only demonstrate how Leibniz arrived at it.

Here we have the series that approximates π:

$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1}.$$

Leibniz thought it would be worth trying to use the arctan function in order to find a power series that would represent π. You can't really find the power series directly from arctan(x); what Leibniz did was differentiate arctan(x) in order to find an expression similar to a geometric series sum (geometric progression). Here's the formal definition of how we can represent a geometric sum:

$$\frac{a}{1-r} = \sum_{n=0}^{\infty} a\,r^n, \qquad |r| < 1.$$

In our case, $a$ is equal to 1 and $r$ is equal to $-x^2$, therefore we can write the arctan(x) derivative as a power series:

$$\frac{d}{dx}\arctan(x) = \frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-x^2)^n.$$

We can write it out as a Leibniz alternating series:

$$\frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n}.$$

Verifying this equality means we would have to prove the convergence interval of the power series; I won't do it here, because this post is about the Leibniz π formula, not about finding the nature of series.
So, right now, we know that the arctan(x) derivative can be represented as a power series. We also know how to integrate power series; it works like an immediate primitive. Integrating the Leibniz alternating series above gives back our initial function arctan(x). It means that:

$$\arctan(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{2n+1} = x - \frac{x^3}{3} + \frac{x^5}{5} - \cdots$$

Leibniz then thought about the angle θ such that tan(θ) = 1, which is θ = π/4. Then:

$$\frac{\pi}{4} = \arctan(1) = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots$$

Leibniz himself explained all this using some geometry concepts; doing some geometry, and integrating by parts, one arrives at the same result. (To be continued…)
Ricardo Santos


# Fun Fact

The term byte has not always referred to an 8-bit set. In 1964 a minicomputer called the PDP-7 was introduced, on which 1 byte was equivalent to 9 bits.

Pay attention to the nomenclature.

Ricardo Santos

# Working with UNIX Files

You've probably come across the situation of wanting to read some data from a file, or write something to it, right? When we work in C, we can use the standard stream type, FILE (which I've written about several times here), as our file descriptor. The C language's file model says there are two kinds of files: binary and text.

Here is how a program that reads the contents of a text file and prints it as output works. First we include <stdio.h>, so we have access to functions like printf() and the file-manipulation functions. After that we create a file descriptor that will be our means of accessing the file in question, and an array to store the data we read. With the fopen() function we open the file and check whether that was possible; otherwise we print an error and quit the program with EXIT_FAILURE (which is a macro for the value 1). Finally we read the contents of the file, print them, and close the descriptor. If you notice, we first write the string "Hello" into the file "test.txt" using fprintf().

In addition to text files, you may also want to read or write a binary file, for example an ELF. To do this, we must use the following functions:

```
size_t fread(void *ptr, size_t size_of_elements,
             size_t number_of_elements, FILE *a_file);

size_t fwrite(const void *ptr, size_t size_of_elements,
              size_t number_of_elements, FILE *a_file);
```

To be continued…

Ricardo Santos


# Multi-Processing Initialization Protocol

I've already written a bit about MP (Multi-Processing) here on my blog, and as you may know, unless it is used, the execution of an application will always be performed, in a given period of time, on a certain CPU processor. This processor will stay the same from boot until a reset; but how is it chosen?

We will see later that all this is related to the LAPIC of the CPU (which exists in each core), and to a register it has, called the APIC ID Register, which is assigned a value during the boot of the computer. The protocol that allows choosing, during boot, which logical processor will be the main one is the "MP Initialization Protocol" (MPIP). MPIP defines two distinct classes of logical processors: the BSP (Bootstrap Processor), and all the other APs (Application Processors). In short, during initialization this protocol chooses which of the logical processors will be the one used during the entire execution time until the next reset, that is, the Bootstrap Processor (BSP); all the others, which are left in a HALT state, i.e. "sleeping", are the Application Processors (APs).

Note: All these logical processors communicate with each other over an ICC (Interrupt Controller Communication) bus.

It is necessary to define the processor to be used (the BSP) during boot so it can execute the BIOS code, that is, the bootstrap code that configures the APIC environment, sets up system-wide data structures, and starts and initializes the APs. When the BSP and APs are initialized, the BSP then begins executing the operating-system initialization code. And finally, how does the processor in question know that it is the BSP? It is quite simple: this is where the APIC comes in, as the BSP will have the BSP flag set in the IA32_APIC_BASE MSR register. This defines which logical processor is the BSP.
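As a sketch (this has to run in ring 0, so take it as an illustration rather than a runnable program), checking the flag looks like this:

```
mov ecx, 0x1B      ; MSR index of IA32_APIC_BASE
rdmsr              ; result comes back in EDX:EAX
test eax, 1 << 8   ; bit 8 is the BSP flag
jnz  we_are_bsp    ; ZF=0 -> this logical processor is the BSP
```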

Note: Something I will leave for another article is the way these processors communicate with each other, and the importance of the Scheduler when we talk about MP.

Ricardo Santos

# Extended Feature Enable Register

Extended Feature Enable Register, or EFER, is a register of the MSR type (Model-Specific Register; I'll explain those in another article). I write about EFER to complement an article I wrote earlier on System Calls, since through this register we can "activate" the SYSCALL/SYSRET pair of instructions, as well as enter or leave the CPU's Long Mode.

The structure of EFER is as follows: bit 0 is SCE (System Call Extensions), bit 8 is LME (Long Mode Enable), bit 10 is LMA (Long Mode Active), and bit 11 is NXE (No-Execute Enable). Note: to access this register, just like all other MSRs, you must load its index into ECX; the EFER index is 0xC0000080. EFER is considered a control register implemented only from x64 architectures onward.

The first bit, bit 0 (SCE), when set, enables syscalls via the SYSCALL/SYSRET pair of instructions. (Intel's alternative pair, SYSENTER/SYSEXIT, is controlled by separate SYSENTER_* MSRs, not by EFER.)
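As a ring-0-only sketch (it cannot run as a normal user program), setting SCE looks like this:

```
mov ecx, 0xC0000080   ; MSR index of EFER
rdmsr                 ; EDX:EAX = current EFER value
or  eax, 1            ; set bit 0 (SCE)
wrmsr                 ; write it back: SYSCALL/SYSRET now enabled
```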

Ricardo Santos

# Fast Tip: Compiling without main()

A quick, and often precious, tip: how to use GCC to compile a C source file that does not contain the main() function. If you try to compile such code "normally", in the default way, you will get an error: the compilation process is not completed, because in the linking part it is impossible to link the crt1.o startup file against the code, due to the missing reference to main(). It turns out that if we want to compile a program without the main() function, we just add a parameter (which in GCC is called a "flag") to the GCC command line: '-c'. This flag allows the compilation to be performed successfully, since it skips the linking phase of the application.

Note that with this you cannot execute the result as a program; no functions will run, neither constructors nor destructors, which makes sense, since there is no main().

Here's the compilation: GCC emits an object file, and notice for yourself that we get no kind of complaint from it.

Ricardo Santos

# System Calls in Assembly x64

In this article I will briefly explain (since I already did in another article) what a System Call is. A System Call, known as a "syscall", is a mechanism that gives an application an access interface from its User Space into Kernel Space, in order to gain access to certain system resources.

From x64 architectures onward, CPUs gained an instruction that does this directly, leaving aside the old way of putting an application into Ring 0 (Kernel Space), which was the use of an emulated syscall through an interrupt: the well-known "INT 0x80" (and Windows' "INT 2E"). This new instruction is SYSCALL, and it accomplishes the same thing as the emulated syscall, but much more efficiently. Its counterpart for returning to User Space is SYSRET; Intel architectures also provide an older, analogous pair, SYSENTER and SYSEXIT (obviously the first one enters the syscall and the second one exits it). When an application is in Kernel Space, its stack cannot be the User Space one; as such, on kernel entry a new stack is associated with the process while it is in Kernel Space (the location of this stack comes from registers holding CPU definitions, MSRs, which I'll tell you about in another article). Obviously the current process context has to be saved before entering Ring 0.

First, before we use this instruction, it is important to check the availability of SYSCALL on our CPU. For that we will use our well-known CPUID instruction:

```
mov eax, 0x80000001
cpuid
test edx, 0x800
```

Bit 11 of EDX will be checked (this bit lives in the extended CPUID leaf 0x80000001); after the `test`, ZF=0 means the bit is set, so SYSCALL is supported.

According to the Intel documentation, all parameters passed to SYSCALL must be passed in registers, not on the stack.

RDI, RSI, RDX, R10, R8, R9 – The parameters of the executed syscall must be passed in these 6 registers; note that they have to be passed in this specific register order.

RAX – Used to indicate the number of the syscall that will be executed. (Note that the value to put in RAX or EAX differs between the SYSCALL instruction and an emulated syscall such as int 0x80; the two interfaces use different syscall tables.)

(Note: the R11 register should not be relied upon across syscalls, because SYSCALL stores the RFLAGS context in it; likewise, RCX receives the return address.)

The syscall's return value will be put in RAX; for some functions this is a linear address, i.e. a pointer. If the execution of the syscall generates an error, that is also placed in RAX, as a negative return value.

Going back to the syscall number passed in RAX: you can get a list of syscalls in the kernel source files "arch/x86/entry/syscalls/syscall_32.tbl" and "arch/x86/entry/syscalls/syscall_64.tbl" (for 32- and 64-bit architectures respectively).

Note: It is not necessary to use the 64-bit registers; depending on the situation we can use the 32-bit ones.

Here is a famous routine, which I think I have managed to optimize to the maximum (as I wrote in an article), which prints a greeting as output:

```
bits 64
default rel
section .rodata

msg:     db `Hello!\n`
msg_len  equ $ - msg

section .text
global _start
_start:
    mov eax, 1          ; syscall 1: write
    mov edi, eax        ; fd 1: stdout (reuses the 1 already in EAX)
    lea rsi, [msg]      ; buffer to write
    mov edx, msg_len    ; number of bytes
    syscall

    mov eax, 60         ; syscall 60: exit
    xor edi, edi        ; exit status 0
    syscall
```

Note how the parameters are passed to SYSCALL almost the same way they are passed in the C calling convention in x64 environments; the only difference is that the fourth argument goes in R10 instead of RCX (which SYSCALL clobbers).

Ricardo Santos