
Implementing Security Defenses

Security Policy
* A security policy should be well thought-out, agreed upon, and contained in a living document that everyone adheres to and that is updated as required.
* Examples of contents include how often port scans are run, password requirements, virus detectors, etc.

Vulnerability Assessment
* Periodically examine the system to detect vulnerabilities (a small sketch of one such check appears after this list):
• Port scanning.
• Check for bad passwords.
• Look for suid programs.
• Unauthorized programs in system directories.
• Incorrect permission bits set.
• Program checksums / digital signatures that have changed.
• Unexpected or hidden network daemons.
• New entries in start-up scripts, shutdown scripts, cron tables, or other system scripts or configuration files.
• New unauthorized accounts.
* The government considers a system to be only as secure as its most far-reaching component. Any system linked to the Internet is inherently less secure than one that is in a sealed room with no external communications.
* Some administrators advocate "security through obscurity", striving to keep as much information about their systems hidden as possible and not announcing any security concerns they come across. Others announce security concerns from the rooftops, under the theory that the hackers are going to find out anyway, and the only ones kept in the dark by obscurity are the honest administrators who need the information most.
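
As one concrete illustration of the checks listed above, here is a minimal Python sketch that walks a few system directories and flags setuid executables and world-writable files. The directory list and output format are assumptions for illustration; a real assessment would also cover bad passwords, unexpected daemons, modified checksums, and the other items in the list.

```python
#!/usr/bin/env python3
"""Minimal sketch of one vulnerability-assessment check: walk system
directories and flag setuid executables and world-writable files.
The directory list and reporting format are illustrative assumptions."""

import os
import stat

SYSTEM_DIRS = ["/usr/bin", "/usr/sbin", "/usr/local/bin"]  # assumed targets

def scan(directories):
    findings = []
    for top in directories:
        for root, _dirs, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue  # unreadable or vanished; skip it
                mode = st.st_mode
                if mode & stat.S_ISUID:
                    findings.append(("setuid", path))
                if mode & stat.S_IWOTH:
                    findings.append(("world-writable", path))
    return findings

if __name__ == "__main__":
    for kind, path in scan(SYSTEM_DIRS):
        print(f"{kind:16} {path}")
```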

Intrusion Detection
* Intrusion detection attempts to detect attacks, both successful and unsuccessful. Different techniques vary along several axes:
• The time that detection occurs, either at the time of the attack or after the fact.
• The types of information examined to detect the attack(s). Some attacks can only be detected by analyzing multiple sources of information.
• The response to the attack, which may range from alerting an administrator, to automatically stopping the attack (e.g. killing an offending process), to tracing back the attack in order to identify the attacker.
  -> Another approach is to divert the attacker to a honey pot, on a honey net. The idea behind a honey pot is a computer running normal services, but which no one uses to do any real work. Such a system should not see any network traffic under normal circumstances, so any traffic going to or from such a system is by definition suspicious. Honey pots are normally kept on a honey net protected by a reverse firewall, which will let potential attackers in to the honey pot, but will not allow any outgoing traffic. (So that if the honey pot is compromised, the attacker cannot use it as a base of operations for attacking other systems.) Honey pots are closely watched, and any suspicious activity is carefully logged and investigated.
* Intrusion Detection Systems, IDSs, raise the alarm when they detect an intrusion. Intrusion Detection and Prevention Systems, IDPs, act as filtering routers, shutting down suspicious traffic when it is detected.
* There are two major approaches to detecting problems:
• Signature-Based Detection scans network packets, system files, etc. looking for recognizable characteristics of known attacks, such as text strings for particular messages or the binary code for "exec /bin/sh". The problem with this approach is that it can only detect previously encountered attacks for which the signature is known, requiring frequent updates of signature lists. (A minimal sketch of such a scan appears at the end of this section.)
• Anomaly Detection searches for "unusual" patterns of traffic or operation, such as unusually heavy load or an unusual number of logins late at night.
* The benefit of this method is that it can detect previously unknown attacks, so-called zero-day attacks.
* One problem with this method is characterizing what is "normal" for a given system. One approach is to benchmark the system, but if the attacker is already present when the benchmarks are made, then the "unusual" activity is recorded as "the norm."
* Another problem is that not all changes in system behavior are the result of security attacks. If the system is bogged down and really slow late on a Thursday night, does that mean that a hacker has gotten in and is using the system to send out SPAM, or does it simply mean that a CS 385 assignment is due on Friday? :-)
* To be effective, anomaly detectors must have a very low false alarm (false positive) rate, lest the warnings get ignored, as well as a low false negative rate in which attacks are missed.
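
To make signature-based detection concrete, the following is a minimal Python sketch that scans files for known byte patterns, such as the "exec /bin/sh" string mentioned above. The signature table is a toy assumption; a real IDS also matches against network packets and relies on a large, frequently updated signature set.

```python
#!/usr/bin/env python3
"""Minimal sketch of signature-based detection: scan files for byte
patterns associated with known attacks. The signature list here is a
toy assumption; real IDSs ship large, frequently updated signature sets."""

import sys

SIGNATURES = {
    b"exec /bin/sh": "shell-exec payload",   # example string from the notes
    b"/etc/passwd": "password-file probe",   # assumed illustrative entry
}

def scan_file(path):
    """Return (label, pattern) pairs for every signature found in the file."""
    hits = []
    with open(path, "rb") as f:
        data = f.read()
    for pattern, label in SIGNATURES.items():
        if pattern in data:
            hits.append((label, pattern))
    return hits

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for label, pattern in scan_file(path):
            print(f"{path}: matched {label!r} ({pattern!r})")
```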

Virus Protection
* Modern anti-virus programs are normally signature-based detection systems, which in some cases also have the ability to disinfect affected files and return them to their original condition.
* Both viruses and anti-virus programs are rapidly evolving. For example, viruses now commonly mutate every time they propagate, and so anti-virus programs look for families of related signatures rather than specific ones.
* Some antivirus programs look for anomalies, such as an executable program being opened for writing (other than by a compiler).
* Avoiding bootleg, free, and shared software can help reduce the chance of catching a virus, but even shrink-wrapped official software has on occasion been infected by disgruntled factory workers.
* Some virus detectors will run suspicious programs in a sandbox, an isolated and secure area of the system which mimics the real system.
* Rich Text Format, RTF, files can't carry macros, and hence can't carry Word macro viruses.
* Known safe programs (e.g. right after a fresh install or after a thorough examination) can be digitally signed, and periodically the files can be re-verified against the stored digital signatures. (The signatures themselves should be kept secure, such as on off-line or write-once media.)
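
Here is a minimal sketch of the sign-and-re-verify idea, using HMAC-SHA256 digests as a stand-in for full public-key digital signatures (a simplifying assumption). The key, target files, and digest store are illustrative; in practice they would live on protected, off-line media.

```python
#!/usr/bin/env python3
"""Minimal sketch of re-verifying known-safe programs. HMAC-SHA256
digests stand in for full public-key digital signatures (a simplifying
assumption); the key and digest store would be kept on protected media."""

import hashlib
import hmac
import json

KEY = b"replace-with-a-real-secret-key"   # assumption: kept off-line in practice

def digest(path):
    """Keyed digest of a file's contents."""
    with open(path, "rb") as f:
        return hmac.new(KEY, f.read(), hashlib.sha256).hexdigest()

def sign(paths, store="signatures.json"):
    """Record digests for known-safe files."""
    with open(store, "w") as out:
        json.dump({p: digest(p) for p in paths}, out, indent=2)

def verify(store="signatures.json"):
    """Re-verify each recorded file against its stored digest."""
    with open(store) as f:
        recorded = json.load(f)
    for path, expected in recorded.items():
        try:
            ok = hmac.compare_digest(digest(path), expected)
        except OSError:
            ok = False  # file missing or unreadable
        print(("OK  " if ok else "FAIL") + " " + path)

if __name__ == "__main__":
    sign(["/usr/bin/ls", "/usr/bin/ssh"])   # assumed example targets
    verify()
```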

Auditing, Accounting, and Logging
* Auditing, accounting, and logging records can also be used to detect anomalous behavior.
* Some of the kinds of things that can be logged include authentication failures and successes, logins, runs of suid or sgid programs, network accesses, system calls, etc. In extreme cases almost every keystroke and electron that moves can be logged for future analysis. (Note that on the flip side, all this detailed logging can also be used to analyze system performance. The down side is that the logging itself also affects system performance (negatively!), and so a Heisenberg effect applies.) A small log-analysis sketch follows this section.
* "The Cuckoo's Egg" tells the story of how Cliff Stoll find one of the early UNIX 
break ins when he noticed anomalies in the accounting records on a computer system being used by physics researchers.
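
As a small example of putting such logs to work, the following Python sketch counts failed logins per user in an authentication log and flags anything over a threshold. The log path, line format, and threshold are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Minimal sketch of mining an authentication log for anomalies: count
failed logins per user and flag anything over a threshold. The log path,
line format, and threshold are assumptions for illustration."""

import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"           # assumed log location
THRESHOLD = 5                            # assumed alert threshold
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+)")

def failed_logins(path=LOG_PATH):
    """Tally failed-login lines per user name."""
    counts = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for user, count in failed_logins().most_common():
        flag = "  <-- investigate" if count >= THRESHOLD else ""
        print(f"{user:20} {count}{flag}")
```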

Tripwire File system (New Sidebar)
* The tripwire file system monitors files and directories for changes, on the assumption that most intrusions eventually result in some sort of undesired or unexpected file changes.
* The tw.config file indicates which directories are to be monitored, as well as which properties of each file are to be recorded. (E.g. one may choose to monitor permission and content changes, but not worry about read access times.)
* When first run, the selected properties for all monitored files are recorded in a database. Hash codes are used to monitor file contents for changes.
* Subsequent runs report any changes to the recorded data, including hash code changes, and any newly created or missing files in the monitored directories. (A minimal sketch of this approach follows this list.)
* For full security it is necessary to also protect the tripwire system itself, most importantly the database of recorded file properties. This could be stored in some external or write-once location, but that makes it harder to update the database when legitimate changes are made.
* It is hard to monitor files that are supposed to change, such as log files. The best tripwire can do in this case is to watch for anomalies, such as a log file that shrinks in size.
* Free and commercial versions are available at http://tripwire.org and http://tripwire.com.
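
Below is a minimal tripwire-like sketch, assuming a small set of monitored directories and a JSON file as the property database: the first run records the size, modification time, and SHA-256 hash of each file; later runs report changed, new, and missing files. This illustrates the idea only and is not the actual Tripwire tool.

```python
#!/usr/bin/env python3
"""Minimal tripwire-like sketch: record selected properties (size, mtime,
SHA-256 hash) of files under monitored directories, then report changed,
new, and missing files on later runs. Paths and database name are assumptions."""

import hashlib
import json
import os

MONITORED_DIRS = ["/etc"]          # assumed directories to watch
DB_PATH = "tripwire_db.json"       # assumed database location (keep it protected)

def file_record(path):
    """Selected properties of one file: size, mtime, and content hash."""
    st = os.stat(path)
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return {"size": st.st_size, "mtime": st.st_mtime, "sha256": h.hexdigest()}

def snapshot(directories):
    """Record properties for every readable file under the monitored directories."""
    records = {}
    for top in directories:
        for root, _dirs, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                try:
                    records[path] = file_record(path)
                except OSError:
                    continue  # unreadable; skip it
    return records

def initialize():
    with open(DB_PATH, "w") as out:
        json.dump(snapshot(MONITORED_DIRS), out)

def check():
    with open(DB_PATH) as f:
        old = json.load(f)
    new = snapshot(MONITORED_DIRS)
    for path in old.keys() - new.keys():
        print("MISSING ", path)
    for path in new.keys() - old.keys():
        print("NEW     ", path)
    for path in old.keys() & new.keys():
        if old[path] != new[path]:
            print("CHANGED ", path)

if __name__ == "__main__":
    if not os.path.exists(DB_PATH):
        initialize()    # first run: build the baseline database
    else:
        check()         # later runs: report deviations from the baseline
```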
