
Cryptography as a Security Tool
* Within a given computer, the transmittal of messages is safe, flexible, and secure, because the OS knows exactly where each one is coming from and where it is going.
* On a network, however, things aren't so straightforward. A rogue computer (or e-mail sender) may spoof its identity, and outgoing packets are delivered to many other computers besides their intended final destination, which brings up two big questions of security:
• Trust - How can the system be sure that the messages received are really from the source they claim to be from, and can that source be trusted?
• Confidentiality - How can one ensure that the messages one is sending are received only by the intended recipient?
* Cryptography can help with both of these problems, through a system of secrets and keys. In the former case, the key is kept by the sender, so that the recipient knows that only the authentic author could have sent the message; in the latter, the key is held by the receiver, so that only the intended receiver can read the message.
* Keys are designed so that they cannot be divined from any public information, and must be guarded carefully. (Asymmetric encryption involves both a public and a private key.)

Encryption
* The basic idea of encryption is to encode a message so that only the desired recipient can decode and read it. Encryption has been around since before the days of Caesar, and is a whole field of study in itself. Only some of the more consequential computer encryption schemes will be covered here.
* The basic process of encryption is shown in the figure and will form the basis of most of our discussion on encryption. The steps in the procedure and some of the key terminology are as follows:
1. The sender first creates a message, m, in plaintext.
2. The message is then entered into an encryption algorithm, E, along with the encryption key, Ke.
3. The encryption algorithm generates the cipher text, c = E(Ke)(m). For any key k, E(k) is an algorithm for generating cipher text from a message, and both E and E(k) should be efficiently computable functions.
4. The cipher text can then be sent over an unsecured network, where it may be received by attackers.
5. The recipient enters the cipher text into a decryption algorithm, D, along with the decryption key, Kd.
6. The decryption algorithm re-generates the plaintext message, m = D(Kd)(c). For any key k, D(k) is an algorithm for generating a plaintext message from cipher text, and both D and D(k) should be efficiently computable functions.
7. The algorithms must have this important property: given a cipher text c, a computer can compute a message m such that c = E(k)(m) only if it possesses D(k). (In other words, messages cannot be decoded without the decryption algorithm and the decryption key.)
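The E(Ke)(m) / D(Kd)(c) round trip above can be made concrete with a short Python sketch. The XOR "cipher" here is a toy stand-in, not a real encryption algorithm; it is used only because XOR is its own inverse, which keeps the structure easy to see:

```python
import os

def E(k):
    # Returns an encryption function for key k: XOR each message byte
    # with the corresponding key byte. A toy one-time-pad-style scheme,
    # NOT secure if the key is ever reused -- illustration only.
    return lambda m: bytes(a ^ b for a, b in zip(m, k))

def D(k):
    # XOR is its own inverse, so decryption is the same operation.
    return lambda c: bytes(a ^ b for a, b in zip(c, k))

m = b"attack at dawn"              # 1. plaintext message
Ke = Kd = os.urandom(len(m))       # symmetric toy: same key both ways
c = E(Ke)(m)                       # 3. cipher text, safe to transmit
assert D(Kd)(c) == m               # 6. only a holder of Kd recovers m
```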
Symmetric Encryption
* With symmetric encryption the same key is used for both encryption and decryption, and must be carefully protected. There are a number of well-known symmetric encryption algorithms that have been used for computer security:
• The Data-Encryption Standard, DES, developed by the National Institute of Standards and Technology, NIST, was the standard civilian encryption algorithm for over 20 years. Messages are broken down into 64-bit chunks, each of which is encrypted using a 56-bit key through a series of substitutions and transformations. Some of the transformations are hidden (black boxes), and are classified by the U.S. government.
• DES is known as a block cipher, because it works on blocks of data at a time. Unfortunately, this is a vulnerability if the same key is used for an extended amount of data. Therefore an enhancement is to not only encrypt each block, but also to XOR it with the previous cipher-text block, in a technique known as cipher-block chaining.
• As modern computers have become faster and faster, the security of DES has declined, to the point where it is now considered insecure because its keys can be exhaustively searched within a reasonable amount of computer time. An enhancement called triple DES encrypts the data three times using three separate keys (actually two encryptions and one decryption) for an effective key length of 168 bits. Triple DES is in widespread use today.
• The Advanced Encryption Standard, AES, developed by NIST in 2001 to replace DES, uses key lengths of 128, 192, or 256 bits, and encrypts in blocks of 128 bits using 10 to 14 rounds of transformations on a matrix formed from the block.
• The Twofish algorithm uses variable key lengths up to 256 bits and works on 128-bit blocks.
• RC5 can vary in key length, block size, and the number of transformations, and runs on a wide variety of CPUs using only basic computations.
• RC4 is a stream cipher, meaning it acts on a stream of data rather than on blocks. The key is used to seed a pseudo-random number generator, which generates a keystream. RC4 is used in WEP, but has been shown to be breakable in a reasonable amount of computer time.
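The cipher-block chaining idea mentioned above can be sketched in Python. The `toy_block_encrypt` function here is a hypothetical stand-in for a real block cipher such as DES or AES (plain XOR with the key), so only the chaining structure is meaningful:

```python
def xor_bytes(a, b):
    # Byte-wise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_encrypt(k, block):
    # Stand-in for a real block cipher -- XOR is used only so the
    # chaining structure below is easy to follow.
    return xor_bytes(block, k)

def cbc_encrypt(k, iv, blocks):
    # Cipher-block chaining: each plaintext block is XORed with the
    # previous cipher-text block (the IV for the first block) before
    # encryption, so identical plaintext blocks yield different output.
    prev, out = iv, []
    for b in blocks:
        prev = toy_block_encrypt(k, xor_bytes(b, prev))
        out.append(prev)
    return out

k, iv = b"\x5a" * 8, b"\x00" * 8
blocks = [b"AAAAAAAA", b"AAAAAAAA"]   # two identical plaintext blocks
c1, c2 = cbc_encrypt(k, iv, blocks)
assert c1 != c2                       # chaining hides the repetition
```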
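The stream-cipher structure described for RC4 (key seeds a generator, the generator yields a keystream, and the keystream is XORed with the data) can likewise be sketched. Python's `random` module is not cryptographically secure, so this shows only the shape of the technique, not real RC4:

```python
import random

def keystream(key, n):
    # The key seeds a pseudo-random generator, which produces a
    # keystream of n bytes. A real stream cipher would use a
    # cryptographically secure generator; this is structure only.
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(n))

def stream_crypt(key, data):
    # XOR the data with the keystream; running the same operation
    # again with the same key decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

m = b"stream ciphers act byte by byte"
c = stream_crypt("shared-key", m)
assert stream_crypt("shared-key", c) == m   # same keystream decrypts
```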

Asymmetric Encryption
* With asymmetric encryption, the decryption key, Kd, is not the same as the encryption key, Ke, and more importantly cannot be derived from it, which means the encryption key can be made publicly available and only the decryption key needs to be kept secret (or vice versa, depending on the application).
* One of the most widely used asymmetric encryption algorithms is RSA, named after its developers - Rivest, Shamir, and Adleman.
* RSA is based on two large prime numbers, p and q, (on the order of 512 bits each), and their product N.
• Ke and Kd must satisfy the relationship:
    ( Ke * Kd ) % [ ( p - 1 ) * ( q - 1 ) ] == 1
• The encryption algorithm is:
    c = E(Ke)(m) = m^Ke % N
• The decryption algorithm is:
    m = D(Kd)(c) = c^Kd % N
* An example using small numbers:
• p = 7
• q = 13
• N = 7 * 13 = 91
• ( p - 1 ) * ( q - 1 ) = 6 * 12 = 72
• Select Ke < 72 and relatively prime to 72, say 5
• Now select Kd such that ( Ke * Kd ) % 72 == 1, say 29
• The public key is now ( 5, 91 ) and the private key is ( 29, 91 )
• Let the message, m = 42
• Encrypt: c = 42^5 % 91 = 35
• Decrypt: m = 35^29 % 91 = 42
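The worked numbers above can be checked directly with Python's built-in three-argument `pow`, which performs modular exponentiation efficiently:

```python
# Verify the small-number RSA example.
p, q = 7, 13
N = p * q                      # 91
phi = (p - 1) * (q - 1)        # 72
Ke, Kd = 5, 29
assert (Ke * Kd) % phi == 1    # keys satisfy the RSA relationship

m = 42
c = pow(m, Ke, N)              # encrypt: m^Ke % N
assert c == 35
assert pow(c, Kd, N) == m      # decrypt: c^Kd % N recovers 42
```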
* Note that asymmetric encryption is much more computationally expensive than symmetric encryption, and as such it is not normally used for large transmissions. Asymmetric encryption is suitable for small messages, authentication, and key distribution, as covered in the following sections.

Authentication
* Authentication involves verifying the identity of the entity that transmitted a message.
* For example, if D(Kd)(c) produces a valid message, then we know the sender was in possession of E(Ke).
* This form of authentication can also be used to verify that a message has not been modified.
* Authentication revolves around two functions, used for signatures ( or signing), and verification:
• A signing function, S(Ks), that produces an authenticator, A, from any given message m.
• A verification function, V(Kv, m, A), that produces a value of "true" if A was created from m, and "false" otherwise.
• Obviously S and V must both be computationally efficient.
• More importantly, it must not be possible to generate a valid authenticator, A, without having possession of S(Ks).
• Furthermore, it must not be possible to divine S(Ks) from the combination of (m, A), since both are sent visibly across networks.
* Understanding authenticators begins with an understanding of hash functions:
• Hash functions, H(m), generate a small fixed-size block of data, known as a message digest or hash value, from any given input data.
• For authentication purposes, the hash function must be collision resistant on m. That is, it should not be reasonably possible to find an alternate message m' such that H(m') = H(m).
• Popular hash functions are MD5, which generates a 128-bit message digest, and SHA-1, which generates a 160-bit digest; both are now considered broken with respect to collision resistance and are unsuitable for new designs.
* Message digests are useful for detecting (accidentally) changed messages, but are not useful as authenticators, because if the hash function is known, then someone could easily change the message and then generate a new hash value for the modified message. Therefore authenticators take things one step further by encrypting the message digest.
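The digest behavior described above can be demonstrated with Python's standard `hashlib` module (using SHA-256 here rather than the older MD5 or SHA-1):

```python
import hashlib

# A message digest is a small, fixed-size fingerprint of arbitrary input.
m  = b"pay Alice $10"
m2 = b"pay Alice $90"          # a tiny modification...

h  = hashlib.sha256(m).hexdigest()
h2 = hashlib.sha256(m2).hexdigest()

assert len(h) == len(h2) == 64   # SHA-256 output is always 256 bits
assert h != h2                   # ...yields a completely different digest
```

Note that anyone can recompute `h2` for the modified message, which is exactly why a bare digest is not an authenticator.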
* A message-authentication code, MAC, uses symmetric encryption and decryption of the message digest, which means that anyone capable of verifying an incoming message could also generate a new message.
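A common modern MAC construction, HMAC, achieves the same goal by mixing a shared symmetric key into the hash computation itself rather than encrypting the digest; Python's standard `hmac` module provides it:

```python
import hashlib
import hmac

# Without the shared key, an attacker can modify the message but
# cannot produce a matching tag.
key = b"shared-secret"
m = b"transfer $100 to Bob"

tag = hmac.new(key, m, hashlib.sha256).digest()

# The receiver, holding the same key, recomputes and compares the tag
# (compare_digest avoids timing side channels).
expected = hmac.new(key, m, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

# A tampered message fails verification against the original tag.
forged_msg_tag = hmac.new(key, b"transfer $999 to Bob", hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged_msg_tag)
```

As the text notes, the symmetry cuts both ways: anyone who can verify a tag with the shared key could also have generated it.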
* An asymmetric approach is the digital-signature algorithm, which produces 
authenticators called digital signatures. In this case Ks and Kv are separate, Kv is the 
public key, and it is not practical to determine S (Ks) from public information. In practice the sender of a message signs it ( produces a digital signature using S(Ks) ), and the receiver uses V(Kv) to verify that it did indeed come from a trusted source, and that it has not been modified.
* There are three good reasons for having separate algorithms for encryption of messages and authentication of messages:
• Authentication algorithms typically require fewer calculations, making verification a faster operation than encryption.
• Authenticators are almost always smaller than the messages, improving space efficiency.
• Sometimes we want authentication only, and not confidentiality, such as when a vendor issues a new software patch.
* Another use of authentication is non-repudiation, in which a person filling out an electronic form cannot deny that they were the ones who did so.

Key Distribution
* Key distribution with symmetric cryptography is a major problem, because all keys must be kept secret, and they obviously can't be transmitted over unsecured channels. One option is to send them out-of-band, say via paper or a confidential conversation.
* Another problem with symmetric keys is that a separate key must be maintained and used for each correspondent with whom one wishes to exchange confidential information.
* Asymmetric encryption solves some of these problems, because the public key can be freely transmitted through any channel, and the private key doesn't need to be transmitted anywhere. Recipients only need to maintain one private key for all incoming messages, though senders must maintain a separate public key for each recipient to which they might wish to send a message. Fortunately the public keys are not confidential, so this key-ring can be easily stored and managed.
* Unfortunately there are still some security concerns regarding the public keys used in asymmetric encryption. Consider, for example, a man-in-the-middle attack in which an attacker intercepts the key exchange and substitutes a phony public key for the real one.
* One solution to the above problem involves digital certificates, which are public keys that have been digitally signed by a trusted third party. But how do we trust that third party, and how do we know they are really who they say they are? Certain certificate authorities have their public keys included within web browsers and other certificate consumers before they are distributed. These certificate authorities can then vouch for other trusted entities, and so on, in a web of trust.

Implementation of Cryptography
* Network communications are implemented in multiple layers - Physical, Data Link, Network, Transport, and Application being the most common breakdown.
* Encryption and security can be implemented at any layer in the stack, with pros and cons to each choice:
• Because packets at lower levels contain the contents of higher layers, encryption 
at lower layers automatically encrypts higher layer information at the same time.
• However security and authorization may be important to higher levels independent of the underlying transport mechanism or route taken.
* At the network layer the most common standard is IPSec, a secure form of the IP layer, which is used to set up Virtual Private Networks, VPNs.
* At the transport layer the most common implementation is SSL, described below.
An Example: SSL
* SSL (Secure Sockets Layer) 3.0 was first developed by Netscape, and has now evolved into the industry-standard TLS protocol. It is used by web browsers to communicate securely with web servers, making it perhaps the most widely used security protocol on the Internet today.
* SSL is quite complex with many variations, only a simple case of which is shown here.
* The heart of SSL is session keys, which are used once for symmetric encryption and then discarded, requiring the generation of new keys for each new session. The big challenge is how to safely create such keys while avoiding man-in-the-middle and replay attacks.
* Prior to commencing the transaction, the server obtains a certificate from a certification authority, CA, containing:
• Server attributes such as unique and common names.
• Identity of the public encryption algorithm, E ( ), for the server.
• The public key, k_e for the server.
• The validity interval within which the certificate is valid.
• A digital signature on the above issued by the CA:
  a = S(K_CA)( attrs, E(k_e), interval )
* In addition, the client will have obtained a public verification algorithm, V(K_CA), for the certifying authority. Modern browsers include these, built in by the browser vendor, for a number of trusted certificate authorities.
* The procedure for establishing secure communications is as follows:
1. The client, c, connects to the server, s, and sends a random 28-byte number, n_c.
2. The server replies with its own random value, n_s, along with its certificate of 
authority.
3. The client uses its verification algorithm to confirm the identity of the sender, and if all checks out, then the client generates a 46-byte random premaster secret, pms, and sends an encrypted version of it as cpms = E(k_e)(pms), using the server's public key from the certificate.
4. The server recovers pms as D(k_d)(cpms), using its private decryption key, k_d.
5. Now both the client and the server can compute a shared 48-byte master secret, ms = f( pms, n_s, n_c ).
6. Next, both client and server generate the following from ms:
• Symmetric encryption keys k_sc_crypt and k_cs_crypt for encrypting messages from the server to the client and vice-versa respectively.
• MAC generation keys k_sc_mac and k_cs_mac for generating authenticators on messages from server to client and client to server respectively.
7. To send a message m to the server, the client sends:
• c = E(k_cs_crypt)( (m, S(k_cs_mac)(m)) )
8. Upon receiving c, the server recovers:
• (m, a) = D(k_cs_crypt)(c)
• and accepts the message if V(k_cs_mac)(m, a) is true.
This approach enables both the server and client to verify the authenticity of every incoming message, and to ensure that outgoing messages are only readable by the process that originally participated in the key generation. SSL is the basis of many secure protocols, including Virtual Private Networks, VPNs, in which private data is distributed over the insecure public internet infrastructure in an encrypted fashion that emulates a privately owned network.
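The per-message pattern in steps 7 and 8 (compute a MAC over the plaintext, bundle it with the message, then encrypt the pair) can be sketched in Python. The XOR keystream cipher below is a hypothetical stand-in for the symmetric cipher keyed with k_cs_crypt; only the MAC-then-encrypt structure is meaningful:

```python
import hashlib
import hmac

def xor_crypt(key, data):
    # Stand-in for the symmetric cipher keyed with k_cs_crypt: a toy
    # repeating XOR keystream derived from the key (NOT secure).
    ks = hashlib.sha256(key).digest()
    return bytes(b ^ ks[i % len(ks)] for i, b in enumerate(data))

def client_send(m, k_cs_crypt, k_cs_mac):
    # Step 7: c = E(k_cs_crypt)( (m, S(k_cs_mac)(m)) )
    a = hmac.new(k_cs_mac, m, hashlib.sha256).digest()
    return xor_crypt(k_cs_crypt, m + a)

def server_receive(c, k_cs_crypt, k_cs_mac):
    # Step 8: recover (m, a) = D(k_cs_crypt)(c), then verify the tag.
    plain = xor_crypt(k_cs_crypt, c)
    m, a = plain[:-32], plain[-32:]      # SHA-256 tag is 32 bytes
    expected = hmac.new(k_cs_mac, m, hashlib.sha256).digest()
    if not hmac.compare_digest(a, expected):
        raise ValueError("authentication failed")
    return m

kc, km = b"k_cs_crypt", b"k_cs_mac"      # hypothetical session keys
msg = b"GET /index.html"
assert server_receive(client_send(msg, kc, km), kc, km) == msg
```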
