Wednesday, 20 February 2019

Windows Domain, Windows Hosting, Hosting Learning, Web Hosting

With small businesses increasingly adopting cloud-based file-sharing technology (Box, Dropbox for Business and OneDrive for Business, to name but three), is there still a place for the Windows domain and Active Directory?

What is a Windows Domain?

Not to be confused with internet domains, a Windows domain is a closed system of users and computers that can share resources and adhere to one centrally controlled management structure. Each user and machine belonging to that domain must authenticate with a domain controller in order to access the system. User accounts, machine accounts, security groups and many other settings are held in a central database called Active Directory.

Some of the benefits for small businesses

Group policy: One of the greatest advantages of a Windows domain setup is the ability to use group policy to control the settings of each workstation in granular detail. Wherever there is a setting in the Windows operating system, group policy allows it to be set and enforced centrally. For example, a standard operating environment is achievable: an administrator can enforce a standard company brand, ensure shared resources like file shares and printers are automatically connected, and have standard applications automatically deployed to each machine. It is possible to prohibit end users from installing any software themselves, or you can have a predefined list of approved software available for installation.

Roaming profiles: In a domain setup, users can log in to any machine that is in the domain using their standard Active Directory credentials. Ordinarily, however, the user experience is not consistent, because individual settings (think MS Office toolbars, macros, email signatures, printing defaults and so on) do not persist, as they are stored locally on each machine. With roaming profiles, you can log in to any machine on the domain and find everything as you left it.

Windows Server Update Services (WSUS): Without a Windows domain, each PC has individual settings for patch management, which creates security concerns and puts pressure on the internet connection. Using WSUS it is possible to set a single update policy which all the machines will adhere to. Additionally, the patches and updates are cached locally on the WSUS server so that they are not downloaded again and again from the public internet.

Password policies: An Active Directory account will conform to a central password policy. This allows the business to enforce password complexity and frequent changes across the whole team, something which greatly tightens security.
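As a rough illustration (a simplified sketch, not the actual Active Directory implementation), a typical complexity rule of "minimum length plus at least three of four character classes" looks something like this in Python:

```python
import re

def meets_complexity(password, min_length=8):
    """Simplified sketch of a typical domain password policy:
    minimum length plus at least three of four character classes."""
    classes = [
        re.search(r"[a-z]", password),        # lowercase letter
        re.search(r"[A-Z]", password),        # uppercase letter
        re.search(r"[0-9]", password),        # digit
        re.search(r"[^a-zA-Z0-9]", password), # symbol
    ]
    return len(password) >= min_length and sum(1 for c in classes if c) >= 3

print(meets_complexity("Tr0ub4dor&3"))  # True
print(meets_complexity("password"))     # False
```

The point of a central policy is that a rule like this is enforced once, at the domain controller, rather than relying on each user or machine to apply it.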

Office 365 Directory Sync: User accounts and passwords can be kept in sync with Microsoft cloud services such as Microsoft Office 365 allowing the user to operate with one set of credentials.

Volume Shadow Copy: If you’re using a Windows file server in a domain environment, it is possible for users to restore previous versions of files and folders on a self-service basis from their workstation.

3rd Party Software: Many third-party packages, especially security-related ones, will demand a Windows domain environment. For example, many business antivirus products require a domain in order to deploy, maintain and monitor workstation installations.

Cloud Based: For small organisations, it is possible to get all the benefits of Active Directory without a physical server humming away in the corner of the office. By using Azure, it is possible to have the best of both worlds.

Monday, 18 February 2019

FTP and TFTP, Web Hosting, Hosting Learning, Hosting Reviews

FTP and TFTP are both application layer protocols. Both are used to transfer a file from client to server or from server to client, but FTP is more complex than TFTP. There are many differences between them, but the major one is that FTP establishes two connections between client and server: TCP port 20 for the data connection and TCP port 21 for the control connection. TFTP, on the other hand, uses a single connection on UDP port 69 to transfer a file between client and server.

Let’s study the other differences between FTP and TFTP with the help of the comparison chart below.


1. Comparison Chart
2. Definition
3. Key Differences

Comparison Chart

Basis for Comparison | FTP | TFTP
Abbreviation | File Transfer Protocol | Trivial File Transfer Protocol
Authentication | Authentication is required for communication between client and server. | No authentication is required.
Service | Uses TCP, a connection-oriented service. | Uses UDP, a connection-less service.
Software | FTP software is larger than TFTP. | TFTP software is smaller and fits into the read-only memory of a diskless workstation.
Connections | Establishes two connections: control (TCP port 21) and data (TCP port 20). | Establishes a single connection for file transfer (UDP port 69).
Commands/Messages | FTP has many commands. | TFTP has only five messages.
Complexity | More complex. | Less complex.

Definition of  FTP

File Transfer Protocol (FTP) is an application layer protocol. FTP is the mechanism provided by TCP/IP for transferring files from client to server or from server to client. It resolves issues such as the two systems using different file conventions, different ways of representing text and data, or different directory structures. To resolve these issues, FTP provides a list of commands.

To copy a file from one host to another, FTP establishes two connections: one for data transfer and one for control. FTP uses the TCP service; TCP port 20 is used for the data connection and TCP port 21 for the control connection. The control connection remains open for the entire interactive session and is closed when the session ends, while a data connection is opened and closed each time a file is transferred.
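The separate data connection is easiest to see in passive mode, where the server's reply on the control connection tells the client which address and port to connect to for data. A small sketch (the reply string below is a made-up example) of decoding that reply:

```python
import re

def parse_pasv(reply):
    """Parse a PASV reply like '227 Entering Passive Mode (192,168,1,10,195,149)'
    into the (host, port) the client should use for the separate data connection."""
    nums = re.search(r"\((\d+,\d+,\d+,\d+,\d+,\d+)\)", reply)
    if not nums:
        raise ValueError("not a PASV reply")
    h1, h2, h3, h4, p1, p2 = map(int, nums.group(1).split(","))
    host = f"{h1}.{h2}.{h3}.{h4}"
    port = p1 * 256 + p2  # the data port is encoded as two bytes
    return host, port

host, port = parse_pasv("227 Entering Passive Mode (192,168,1,10,195,149)")
print(host, port)  # 192.168.1.10 50069
```

In practice Python's ftplib handles this for you; the sketch just shows why FTP is described as a two-connection protocol.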


The control connection uses simple rules for communication. The data connection, on the other hand, is more complex, as it uses different commands for the variety of data being transferred. Although the user is authenticated at connection establishment, FTP is not secure: the password supplied by the user is sent in plain text, and the data is transferred in plain text too, so both can be intercepted by an attacker. One solution is to add SSL (Secure Sockets Layer).

Another way to transfer files securely is to use an independent protocol, SFTP (SSH File Transfer Protocol). SFTP is part of the SSH protocol suite.

Definition of TFTP

Trivial File Transfer Protocol (TFTP) is an application layer protocol. It is used when we need to transfer a file from client to server or from server to client without the features of FTP. TFTP's software package is smaller than FTP's and can fit into the read-only memory of a diskless workstation, where it can be used at bootstrap time; it fits easily into ROM because it requires only IP and UDP. The sender always transmits data in fixed-size blocks of 512 bytes and waits for an acknowledgement of each block before sending the next.


There are five messages in TFTP: RRQ, WRQ, DATA, ACK and ERROR. RRQ is a read request message, used to establish a connection between client and server for reading data. WRQ is a write request message, used to establish a connection for writing data. DATA is the message used by the client or server to send a block of data. ACK is the acknowledgement message used by the client or server to acknowledge receipt of a data block. ERROR is the message used by the client or server when there is a problem establishing the connection or transferring the data.
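These messages have a very simple wire format (which is part of why TFTP fits into ROM): a 2-byte opcode, followed by NUL-terminated strings for requests or a 2-byte block number for ACKs. A sketch of building two of them, following the layout described above:

```python
import struct

# TFTP opcodes
RRQ, WRQ, DATA, ACK, ERROR = 1, 2, 3, 4, 5

def build_rrq(filename, mode="octet"):
    """Build a read-request packet: 2-byte opcode, then the filename
    and transfer mode, each as a NUL-terminated ASCII string."""
    return (struct.pack("!H", RRQ)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

def build_ack(block_no):
    """An ACK is just the opcode and a 2-byte block number."""
    return struct.pack("!HH", ACK, block_no)

print(build_rrq("boot.img"))  # b'\x00\x01boot.img\x00octet\x00'
print(build_ack(1))           # b'\x00\x04\x00\x01'
```

A real client would send the RRQ to UDP port 69 and then ACK each 512-byte DATA block as it arrives; a final block shorter than 512 bytes signals the end of the transfer.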

Key Differences Between FTP and TFTP

1. The full form of FTP is File Transfer Protocol whereas, the full form of TFTP is Trivial File Transfer Protocol.

2. Authentication is required in FTP when establishing the connection. On the other hand, no authentication is required when communicating with TFTP.

3. FTP is a connection-oriented service, whereas TFTP is a connection-less service.

4. The software of TFTP is smaller than that of FTP and fits into the read-only memory of a diskless workstation.

5. FTP establishes two connections: a control connection on TCP port 21 and a data connection on TCP port 20. TFTP establishes a single connection on UDP port 69 for file transfer.

6. FTP has many commands to perform actions, whereas TFTP uses just five messages.

Sunday, 17 February 2019

DKIM(DomainKeys Identified Mail), Hosting Learning, Web Hosting

DKIM (DomainKeys Identified Mail) is a method to validate the authenticity of email messages. When each email is sent, it is signed using a private key and then validated on the receiving mail server (or ISP) using a public key that is in DNS. This process verifies that the message was not altered during transit.

Why should I have a DKIM record?

While DKIM isn't required, emails signed with DKIM appear more legitimate to your recipients and are less likely to go to Junk or Spam folders. Like SPF, passing DKIM is required for "Domain-based Message Authentication, Reporting & Conformance" (DMARC), a newer standard to reduce email spoofing which builds on top of SPF and DKIM.

In addition to verifying the authenticity of an email message, DKIM also provides a way for ISPs to track and build a reputation on your domain's sending history. This is why we strongly encourage signing DKIM with your own domain, allowing you to build a reputation as opposed to using our sending domain. This reputation is portable and will help you control your reputation and sending practices across multiple sources.

How does DKIM work?

Similar to SPF, DKIM also uses DNS TXT records with a special format. When a private/public key pair is created, the public key is published in your domain's DNS as a TXT record (at a selector name under _domainkey), for example: IN TXT "k=rsa\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDOCTHqIIQhGNISLchxDvv2X8NfkW7MEHGmtawoUgVUb8V1vXhGikCwYNqFR5swP6UCxCutX81B3+5SCDJ3rMYcu3tC/E9hd1phV+cjftSFLeJ+xe+3xwK+V18kM46kBPYvcZ/38USzMBa0XqDYw7LuMGmYf3gA/yJhaexYXa/PYwIDAQAB"
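The record itself is just a list of tag=value pairs separated by semicolons (k= names the key type, p= carries the base64-encoded public key). A quick sketch of pulling those tags apart in Python, using a shortened hypothetical record:

```python
def parse_dkim_record(txt):
    """Split a DKIM TXT record into its tag=value pairs.
    Tags are separated by semicolons; surrounding whitespace is ignored."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if part and "=" in part:
            tag, _, value = part.partition("=")
            tags[tag.strip()] = value.strip()
    return tags

# shortened example record, not a real key
record = "v=DKIM1; k=rsa; p=MIGfMA0GCSq"
tags = parse_dkim_record(record)
print(tags["k"])  # rsa
```

A receiving mail server does this lookup and parse automatically: it fetches the TXT record named by the selector in the message's DKIM-Signature header, extracts the p= key, and uses it to verify the signature.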

Wednesday, 30 January 2019

Web Hosting, Domains Name, Web Hosting Learning, Hosting Reviews

Keyword rich domain names play a crucial role in the success of any website. When you select a domain you definitely want it to be exclusive and easy to recall, but if it is also search engine friendly it becomes easier for others to find you on the web. Though there are several factors that can lead to the success of your website, the one we will be discussing today is the importance of having a keyword rich domain. Keyword rich domain names are names that are relevant to your business. Here are certain benefits of registering a keyword rich domain name.

Physical Targeting

If you plan to target your local market, then you will need your domain name to reflect this fact. Your local users will find it easier to locate you on search engines if your domain includes a geographical identifier.

If you have a keyword rich domain you will find it easier to insert a number of modifiers that may be useful and relevant to your website. The keywords used in your domain name should appeal to the local targeted market, which can eventually help in generating good traffic.

Pay Per Click Ads

Keyword rich domains are very beneficial for pay-per-click ads and banner ads. Those who see an ad with a unique domain name that contains relevant terms are more likely to click on those ads and the links in them.

Keywords have always been a standard part of search engine optimization. A keyword rich domain indicates to a potential client that your website is related to the particular niche they are looking for.

Targeted Customers

Domain Registration is compulsory and your domain name is a very important factor that directs quality traffic to your websites. You should not take for granted that keyword rich domains can just provide you short cuts to achieve high rankings on the search engines. A domain check on search engines can provide you with a variety of keyword rich domains.

Domain names that are rich in keywords are part of an overall strategy for offsite and onsite optimization. Apart from keyword rich domain names, there are domain names that have earned fame as brands, such as YouTube, Twitter and Facebook. They have innumerable loyal visitors and are considered to be community-based websites. Businesses that wish to compete should always try to use keyword rich domain names.

Monday, 28 January 2019

Whether you’ve just been tasked with finding a new hosting platform for your company’s website, or you’ve just departed the corporate world to build a “better mousetrap”, you will be faced with many decisions on how to handle your website or applications. When researching hosting options, it doesn’t take long to identify dedicated servers and cloud servers as the two most popular options. The question then becomes: how do you decide which is better for your specific application or business model? In this article, we compare five key factors to consider when choosing a server.

Dedicated Servers, Cloud Servers, Web Hosting, Hosting Learning

Dedicated and Cloud Servers both perform similar basic functions. They receive requests for the information they store, process those requests, and then return that information back to the user. While seemingly straightforward, the differences in how these options handle basic functions can greatly affect implementation time, the users’ experience, and your bottom line.

Configuration Differences

As we’ve previously covered, Dedicated Servers and Cloud Servers are widely utilized by companies who need reliability and performance. Due to a more robust systems architecture, they can (usually) handle significantly more traffic, provide faster response times, and ensure greater application resiliency than shared or VPS hosting. This is achieved with the configuration of the physical server, or in the case of cloud, the underlying hypervisor. Let’s review these configuration differences:

A dedicated server is a self-contained physical unit that includes all of the necessary hardware for a business to host their product. As the name implies, this unit is “dedicated” to a single host allowing for maximum control and configurability. The processor, memory, and disk storage are chosen during initial set up. Additional memory and disks can be added to the configuration as long as there are available slots or bays.

Unlike dedicated servers, multiple cloud server environments are hosted on a physical machine. Cloud servers tend to allocate storage using a large SAN or other clustered filesystem, such as Ceph. The virtual machine data and hosted data are decentralized to accommodate hosting multiple cloud environments on the same physical server. This also provides for state migration in the event of failure. A hypervisor is installed on a separate server to handle the partitioning of different sized cloud servers (virtual machines). The hypervisor also manages the physical resources that are allotted to each cloud server such as RAM, storage space, and processor cores.

Five Dedicated / Cloud Server Comparison Areas

The configuration differences between dedicated servers and cloud servers are clear. Here are five categories where these differences become apparent.

1. Performance

Data Transfer Speed

Dedicated servers typically store and process data locally. Due to this relative proximity, when a request is made, there is very little delay in retrieving and processing information. This gives dedicated servers an edge when milliseconds and microseconds count – such as with heavy computing or high-frequency financial transactions.

Cloud servers, on the other hand, need to access data from the SAN. This requires that a request traverse the backend infrastructure to be processed. Once the data is returned, it still has to be routed by the hypervisor to the allotted processor before it can be handled. This extra trip back and forth to the SAN and the additional processing time introduce latency that wouldn’t otherwise be evident.


Multiple cloud servers are typically housed on a physical server. As a result, processor cores need to be effectively managed to avoid performance degradation. This processor management is done by the hypervisor – an application built specifically to divide physical server resources among underlying cloud servers. Due to the way most hypervisors allocate resources, this can add another layer of latency to cloud hosting. Any request must be scheduled and placed into a queue to be executed.

Dedicated servers, by definition, have processors that are devoted to the application or website that is hosted on the server. They do not need to queue requests unless all processing power is being utilized. This allows the greatest level of flexibility and capability. Thus, many enterprise-level systems engineers choose dedicated servers for CPU intensive tasks, while utilizing cloud servers for other tasks.


Cloud servers provide advanced flexibility and scalability due to their decentralized data storage and shared nature. While sharing certain things works well, sharing a physical network interface puts a tenant at risk of bandwidth throttling. This throttling can occur when other tenants on the server are also utilizing the same network interface. Many hosting providers have the option for a dedicated network interface card (NIC) to be provisioned to a cloud server. This is recommended if you need to utilize the max available bandwidth. However, implementing NICs can be costly due to the complexity of implementation.

Dedicated servers are not at risk of throttling that is caused by a shared environment since their network interfaces are dedicated to the hosted application. Networking is also far simpler with dedicated servers and this introduces fewer points of failure.

2. Scalability


Cloud server storage expansion is virtually limitless, provided the vendor is using a recent hypervisor and operating system. Due to the off-host nature of the storage provided by the SAN, additional storage space can be provisioned without interacting with the cloud server. This means that cloud storage expansion will not usually incur downtime. Cloud servers offer clear benefits to high-profile or unproven products that may require massive and instant scalability.

Dedicated servers have limited storage capacity due to the physical number of drive bays or DAS arrays available on the server. Additional storage can be added only if there are open bays. Adding drives to open bays can generally be accomplished with a modern RAID controller, associated memory module / battery, and underlying LVM filesystem. However, additional DAS arrays are rarely hot-swappable and will require an outage in order to be added. This downtime can be avoided, but requires a significant amount of preparation, and will generally require maintaining multiple copies of critical application data in a multi-hub setup.


Cloud server customers are limited to the processor speed and cloud node type that their hosting provider offers. While additional cores can be provisioned to a cloud tenant, limitations may be experienced based on occupancy and resources allocated on the node. This can limit large-scale hosts within a cloud environment. However, if there are cores available on the server, they can be provisioned instantly.

Dedicated servers cannot change their processors without a maintenance window. If additional processing capabilities are needed, a site will either need to be migrated to a completely different server, or be networked with another dedicated server to help manage exponential platform growth.

3. Migration

Cloud server resources can be provisioned instantly and are limited only by the underlying host or node. However, large expansions will require scale-out planning that leverages multiple cloud servers or a migration to a dedicated or hybrid cloud architecture.

Dedicated server migrations have many of the same limitations. The downtime for both use-cases is a side effect of transferring the OS and data from the old physical server to the new.

Seamless migration is achievable in both instances; however, it requires a significant investment in both time and resource planning. When migrating, the new solution should consider both current and future growth, and provide an effective scalability plan. Both the old and new solutions will need to run concurrently until the “switch is flipped” and the new server(s) take over. Additionally, the old server(s) will need to be maintained as a backup for a short time to ensure that the new platform is performing within its operational expectations.

4. Systems Administration / Operational Differences

Cloud server planning and operation has considerably different implications than dedicated servers. While scalability is generally faster and has less impact on operations, it has a much lower ceiling of capability. Limitations of the cloud environment need to be analyzed and planned for. Cloud servers do allow you to focus on and take advantage of solutions automation (i.e. Docker, Kubernetes, Puppet, Chef, etc.) and optimize your server usage for cost and efficiency. Currently, solutions automation is much more difficult to accomplish with many “one size fits all” dedicated server providers that do not hone products to your needs.

5. Price

Both cloud servers and dedicated servers have different aspects that can make their cost profile vary widely. In our discussion on scalability, we mentioned that dedicated network interfaces for cloud servers can be a valuable, albeit expensive, option. Additionally, dedicated servers can be affixed with terabytes of memory, NVMe disks, 10/25/100GbE Network cards, and countless other hardware options that will increase the cost. Cloud servers will generally have a cost advantage at the lower end of the spectrum, but tend to lose their cost efficiency at scale. Meanwhile, a dedicated server will have a higher entry cost, but provide more reliable and cost-efficient scaling as your product grows.

Wednesday, 23 January 2019

When setting up your first website it’s not uncommon to get confused between domain name registration and web hosting.

Domain Name, Web Hosting, Web Hosting Reviews, Hosting Guides

Your domain name is the name of your site, or your URL, and can be purchased by going to a domain name registrar. Domain names usually range from about $10 to $50/year depending on the extension (.ca domains are more expensive than .com).

In order for your website to appear on the Internet, the files need to be uploaded to a server. These can be “hosted” at a hosting company. Hosting is usually billed monthly or annually at a rate of $10 to $50/month depending on the type of server you need and how much space and bandwidth you use.

Domain names can be purchased anywhere, but it’s convenient to buy the domain and set up hosting in the same location.

When setting up WordPress sites, I recommend the following hosting providers:


All of these hosting companies provide domain name registration as well as hosting making it very convenient.

You can however, purchase a domain name and then host it somewhere else.

The following domain name registrars are quite popular:


Once you’ve purchased a domain name, you will be provided with login details to access your account. It’s important to keep this info on hand and pass it on to your web developer.

If you’ve purchased a domain name at one company and decide to host it somewhere else, then you’ll need to log in to your domain registrar and modify the DNS. This DNS change tells your domain registrar that the URL you are using is now being hosted by someone else.

Before modifying the DNS, be mindful of your email addresses. If you’ve set up email addresses with your domain registrar, you will need to set these up again with your hosting provider. Before making any changes, it’s best to contact your web developer or IT department and make sure that the changes won’t result in email loss.

This whole domain name registration and hosting business may seem daunting, but it’s very simple once you’ve done it once or twice. If you’re setting up a brand new site, purchasing both the domain name and hosting from one hosting company will make it simpler. If you’ve already purchased the domain name somewhere else, just make sure that you keep your login details for both the domain registrar and the hosting provider and pass this info to your web developer.

Monday, 21 January 2019

So, you’ve invested big money to get yourself dedicated server hosting. You’ve got the expertise, you’ve hired the right people, and you’ve got the infrastructure for it. We’re here to tell you the many ways you can use your dedicated server to its full potential:

Dedicated Servers, Web Hosting, Hosting Guides

1. Web server: This is the most common use of dedicated servers – got tons of traffic on your website? Dedicated servers are perfect for hosting such a site. If you’ve outgrown your shared servers, dedicated hosting is the next best alternative for you.

2. Backup server: Technical failure, human error or natural disasters can be threats to data security. Use your dedicated server to back up all your data so as to avoid the risk of losing essential data for good.

3. Database: Your dedicated server could double as a reliable, efficient data storage system for other applications and websites. Additionally, the dedicated server can be used for analyzing data & archiving.

4. Application Server: Here your dedicated server runs a single, large, specific application. These applications are generally business-critical, so you can set up multiple dedicated servers to ensure load distribution and splitting.

5. E-mail servers: E-mail servers allow you to manage e-mail and have complete control over reception, distribution and delivery. It’s best to have your e-mail at your own domain so you can manage the storage and upload limits. With your own server, you can send heavy files without a size cap.

6. Chat room servers: Get your own chat room set up, also known as an IRC server. Every chat room uses such a server, allowing file transfers, encryption, proxying, chat bots etc.

7. VPN Servers: Virtual Private Network servers provide secure connections over public networks and channel all your traffic through those connections.

8. Nagios Server: This server monitors your infrastructure and alerts you if something goes wrong with a host. It tracks network issues, host resources and environmental probes such as moisture and temperature.

9. Video Streaming: Your dedicated server is built to handle heavy files and large traffic, making it ideal for video streaming.

10. Music Jukebox: A web based jukebox allows you to play & organise your music files from any device you use. Use it to stream different high quality multimedia at various bandwidths. Access your music library from anywhere.

If you’ve got more ideas on how you can use your dedicated servers, we’d love to hear from you!


