Skip Levens, Author at Backblaze Blog | Cloud Storage & Cloud Backup
https://www.backblaze.com/blog/author/skip/

NAS 101: Setting Up and Configuring Your NAS
https://www.backblaze.com/blog/nas-101-setting-up-and-configuring-your-nas/
Tue, 02 Mar 2021
Read this guide to learn how to configure your NAS using storage deployment best practices.

Upgrading to a network attached storage (NAS) system is a great decision for a growing business. A NAS system offers bigger storage capacity, a central place to organize your critical files and backups, easier multi-site collaboration, and better data protection than individual hard drives or workstations. But configuring your NAS correctly can mean the difference between enjoying a functional storage system that will serve you well for years and spending what might feel like years on the phone with support.

After provisioning the right NAS for your needs (we have a guide for that, too), you’ll want to get the most out of your investment. Let’s talk about the right way to configure your NAS using storage deployment best practices.

In this post, we’ll cover:

  1. Where to locate your NAS and how to optimize networking.
  2. How to set up your file structure and assign administrator and user access.
  3. How to configure NAS software and backup services.

Disclaimer: This advice will work for almost all NAS systems aside from the very large and complex systems typically installed in data center racks with custom network and power connections. For that, you’ve probably already advanced well beyond NAS 101.

➔ Download Our Complete NAS Guide

Setup Logistics: Where and How

Choosing a good location for your NAS and optimizing your network are critical first steps in ensuring the long-term health of your system and providing proper service to your users.

Where to Keep Your NAS

Consider the following criteria when choosing where in your physical space to put your NAS. A good home for your NAS should be:

  • Temperature Controlled: If you can’t locate your NAS in a specific, temperature-controlled room meant for servers and IT equipment, choose a place with good airflow that stays cool to protect your NAS from higher temperatures that can shorten component life.
  • Clean: Dust gathering around the fans of your NAS is a sign that dust could be entering the device’s internal systems. Dust is a leading cause of failure for both system cooling fans and power supply fans, which are typically found under grills at the back of the device. Make sure your NAS’s environment is as dust-free as possible, and inspect the area around the fans and the fans themselves periodically. If you notice dust buildup, wipe off the surface dust with a static-free cloth and investigate air handling in the room. Air filters can help to minimize dust.
Dust-free fans are happy fans.
  • Stable: You’ll want to place your system on a flat, stable surface. Try to avoid placing your NAS in rooms that get a lot of traffic. Vibration tends to be rough on the hard drives within the NAS—they value their quiet time.
  • Secure: A locked room would be best for a physical asset like a NAS system, but if that’s not possible, try to find an area where visitors won’t have easy access.

Finally, your NAS needs a reliable, stable power supply to protect the storage volumes and data stored therein. Unexpected power loss can lead to loss or corruption of files being copied. A quality surge protector is a must. Better yet, invest in an uninterruptible power supply (UPS) device. If the power goes out, a UPS device will give you enough time to safely power down your NAS or find another power source. Check with your vendor for guidance on recommended UPS systems, and configure your NAS to take advantage of that feature.

How to Network Your NAS

Your NAS delivers all of its file and backup services to your users via your network, so optimizing that network is key to enhancing the system’s resilience and reliability. Here are a few considerations when setting up your network:

    • Cabling: Use good Ethernet cabling and network router connections. Often, intermittent connectivity or slow file serving issues can be traced back to faulty Ethernet cables or ports on aging switches.
    • IP Addresses: If your NAS has multiple network ports (e.g., two 1GigE Ethernet ports), you have a few options to get the most out of them. You can connect your NAS to different local networks without needing a router. For example, you could connect one port to the main internal network that your users share and a second port to your internet-connected cameras or IoT devices—a simple way to make both networks accessible to your NAS. Another option is to set one port with a static IP address and configure the second port to retrieve an IP address dynamically via DHCP, giving you an additional way to access the system in case one link goes down. A third option, if it’s available on your NAS, is to link multiple network connections into a single logical connection. This feature (called 802.3ad link aggregation, or port bonding) delivers more network performance than a single port can provide.

Wait. What is DHCP again?
DHCP stands for Dynamic Host Configuration Protocol. It automatically assigns an IP address from a pool of addresses, minimizing the human error of manual configuration and reducing the network administration required.

  • DNS: To provide its services, your NAS relies on domain name servers—DNS—that it can query to translate the host names in users’ requests into IP addresses. Most NAS systems will allow you to set two DNS entries for each port. You might already be running a DNS service locally (e.g., so that staging.yourcompany.local goes to the correct internal-only server), but it’s a good practice to provide both a primary and a secondary DNS server for the system to query. That way, if the first DNS server is unreachable, the second can still look up internet locations that applications running on your NAS will need. If one DNS entry is assigned by your local DHCP server or internet provider, set the second DNS entry to something like Cloudflare DNS (1.1.1.1 or 1.0.0.1) or Google DNS (8.8.8.8 or 8.8.4.4).
A typical network configuration interface. In this case, we’ve added Cloudflare DNS in addition to the DNS entry provided by the main internet gateway.
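To illustrate why that secondary DNS entry matters, here is a toy Python sketch of the failover behavior. The resolver functions and addresses below are illustrative stand-ins, not a real DNS client:

```python
# Toy sketch of primary/secondary DNS failover (no real DNS traffic involved).

def resolve_with_fallback(hostname, resolvers):
    """Try each resolver in order; return the first successful answer."""
    last_error = None
    for resolve in resolvers:
        try:
            return resolve(hostname)
        except OSError as exc:  # a resolver that is down raises an error
            last_error = exc
    raise last_error or OSError("no resolvers configured")

# Stand-ins: a primary server that is unreachable, a secondary that works.
def primary(hostname):
    raise OSError("primary DNS (192.168.1.1) unreachable")

def secondary(hostname):
    return "93.184.216.34"  # hypothetical answer from the secondary server

print(resolve_with_fallback("example.com", [primary, secondary]))
# → 93.184.216.34
```

Your NAS does the same thing internally: only if the first configured server fails to answer does it fall through to the second.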

Access Management: Who and What

Deciding who has access to what is entirely unique to each organization, but there are some best practices that can make management easier. Here, we share some methods to help you plan for system longevity regardless of personnel changes.

Configuring Administrator Access

Who has the keys to the kingdom? What happens when that person moves departments or leaves the company? Planning ahead for these contingencies should be part of your NAS setup. We recommend two practices to help you prepare:

  1. Designate multiple trusted people as administrators. Your NAS system probably comes with a default admin name and password (which you should, of course, change), but it’s beneficial to have at least one more administrator account. If one admin isn’t available, a backup admin can still log in. Additionally, using an organization-wide password manager like Bitwarden for your business is highly recommended.
  2. Use role-based emails for alerts. You’ll find many places in your NAS system configuration to enter an email address in case the system needs to send an alert—when power goes out or a disk has failed, for example. Instead of entering a single person’s email, use a role-based email instead. People change, but storageadmin@yourcompany.com will never leave you. Role-based emails are often implemented as a group email, allowing you to assign multiple people to the account and increasing the likelihood that someone will be available to respond to warnings.

Configuring User Access

With a NAS, you have the ability to easily manage how your users and groups access the shared storage needed for your teams to work effectively. Easy collaboration was probably one of the reasons you purchased a NAS in the first place. Building your folder system appropriately and configuring access by role or group helps you achieve that goal. Follow these steps when you first set up your NAS to streamline storage workflows:

    1. Define your folders. Your NAS might come pre-formatted with folders like “Photo,” “Video,” “Web,” etc. This structure makes sense when only one person is using the NAS. In a multi-user scenario, you’ll want to define the folders you’ll need, for example, by role or group membership, instead.
Example Folder Structure
Here is an example folder structure you could start with:

  • Local Backups: A folder for local backups, accessible only by backup software. This keeps your backup data separate from your shared storage.
  • Shared Storage: A folder for company-wide shared storage accessible to everyone.
  • Group Folders: Accounting, training, marketing, manufacturing, support, etc.
Creating a shared folder.
    2. Integrate with directory services. If you use a directory service like Active Directory or other LDAP services to manage users and privileges, you can integrate it with your NAS to assign access permissions. Integrating with directory services will let you use those tools to assign storage access instead of assigning permissions individually. Check your NAS user guide for instructions on how to integrate those services.
    3. Use a group- or role-based approach. If you don’t use an external user management service, we recommend setting up permissions based on groups or roles. A senior-level person might need access to every department’s folders, whereas a person in one department might only need access to a few folders. For example, for the accounting team’s access, you can create a folder for their files called “Accounting,” assign every user in accounting to the “Accounting” group, then grant folder access for that group rather than for each and every user. As people come and go, you can just add them to the appropriate group instead of configuring user access permissions for every new hire.
Applying group-level permissions to a shared folder. In this case, the permissions include the main folder open to all employees, the accounting folder, and the operations folder. Any user added to this user group will automatically inherit these default permissions.
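The group-based approach can be modeled in a few lines. This is a minimal sketch of the idea, not any specific NAS's API; the group names and folders are examples:

```python
# Minimal model of group-based access control: grant folders to groups,
# assign users to groups, and derive each user's access from membership.

GROUP_FOLDERS = {
    "all-employees": {"Shared Storage"},
    "accounting": {"Accounting"},
    "operations": {"Operations"},
}

USER_GROUPS = {
    "alice": {"all-employees", "accounting"},
    "bob": {"all-employees"},
}

def accessible_folders(user):
    """Union of every folder granted to any group the user belongs to."""
    folders = set()
    for group in USER_GROUPS.get(user, set()):
        folders |= GROUP_FOLDERS.get(group, set())
    return folders

print(sorted(accessible_folders("alice")))  # ['Accounting', 'Shared Storage']
```

Onboarding a new accountant is then a one-line change to `USER_GROUPS` rather than a per-folder permissions edit, which is exactly the maintenance win described above.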

The Last Step: NAS Software and Backup Management

Once you’ve found a suitable place for your NAS, connected it to your network, structured your folders, and configured access permissions, the final step is choosing what software will run on your NAS, including software to ensure your systems and your NAS itself are backed up. As you do so, keep the following in mind:

    • Prioritize the services you need. When prioritizing your services, adopt the principle of least privilege. For example, if a system has many services enabled by default, it makes sense to turn some of them off to minimize the system load and avoid exposing any services that are unnecessary. Then, when you are ready to enable a service, you can thoughtfully implement it for your users with good data and security practices, including applying the latest patches and updates. This keeps your NAS focused on its most important services—for example, file system service—first so that it runs efficiently and optimizes resources. Depending on your business, this might look like turning off video-serving applications or photo servers and turning on things like SMB for file service for Mac, Windows, and Linux; SSH if you’re accessing the system via command line; and services for backup and sync.
Enabling priority file services—in this case, SMB service for Mac and Windows users.
Setting a NAS device to accept Time Machine backups from local Mac systems.

Common Services for Your NAS

  • SMB: The most common storage access and browsing protocol to “talk” to modern OS clients. It allows these systems to browse available systems, authenticate to them, and send and retrieve files.
  • AFP: An older protocol that serves files for older Mac clients that do not work well with SMB.
  • NFS: A distributed file system protocol used primarily for UNIX and Linux systems.
  • FTP and SFTP: File serving protocols for multiple, simultaneous users, common for large directories of files that users need occasional access to, like training or support documents. SFTP is more secure and strongly preferred over FTP. For many of these use cases, though, you will likely find it easier to create and manage a read-only folder on your NAS instead.
  • rsync: A file protocol for backups, allowing systems to easily connect to and back up their systems using the rsync file transfer and sync utility. If your local servers or systems back up to your NAS via rsync, this service will need to be enabled on the NAS.
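If you script rsync backups to the NAS yourself, the invocation typically looks like the sketch below. The hostname, module name, and paths are hypothetical; check your NAS's rsync service settings for the real ones:

```python
# Sketch of building and running an rsync backup to a NAS rsync service.
import subprocess

def build_rsync_cmd(source_dir, nas_target):
    """Assemble an rsync command: archive mode, compressed, deletions mirrored."""
    return [
        "rsync",
        "-az",        # -a: archive mode (permissions, times); -z: compress in transit
        "--delete",   # mirror deletions so the NAS copy matches the source
        source_dir,
        nas_target,
    ]

# Hypothetical NAS target using rsync daemon syntax (user@host::module/path):
cmd = build_rsync_cmd("/home/alice/projects/", "backup@nas.local::backups/alice/")
print(" ".join(cmd))
# To actually run it (requires the rsync service enabled on the NAS):
# subprocess.run(cmd, check=True)
```

The trailing slash on the source directory matters to rsync: with it, the directory's contents are copied; without it, the directory itself is created inside the target.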

The Final, Final Step: Enjoy All the Benefits Your NAS Offers

If you’ve followed our NAS 101 series, you now have a system sized for your important data and growing business that’s configured to run at its best. To summarize, here are the major takeaways to remember when setting up your NAS:

  • Keep your NAS in a cool, safe, clean location.
  • Optimize your network to ensure reliability and maximize performance.
  • Plan for ease of use and longevity when it comes to folder structure and access management.
  • Prioritize the software and services you need when first configuring your NAS.
  • Make sure your systems are backed up to your NAS, and your NAS is backed up to an off-site location.

Have you recently set up a NAS in your office or home office? Let us know about your experience in the comments.

NAS Collaboration Guide: How to Configure Shared Storage Between Locations
https://www.backblaze.com/blog/nas-collaboration-guide-how-to-configure-shared-storage-between-locations/
Thu, 25 Feb 2021
Learn how to implement cloud sync for multi-office collaboration on NAS devices and how to protect your NAS data with cloud backup.


When you’re growing a business, every milestone often pairs exciting opportunities with serious challenges. Gavin Wade, Founder & CEO of Cloudspot, put it best: “In any startup environment, there are fires all over the place. You touch the door handle. If it’s not too hot, you let it burn, and you go take care of the door that has smoke pouring out.”

Expanding your business to new locations or managing a remote team has the potential to become a five-alarm fire, and fast—particularly from a data management perspective. Your team needs simple, shared storage and fail-safe data backups, and all in a cost-effective package.

Installing multiple NAS devices across locations and syncing with the cloud provides all three, and it’s easier than it sounds. Even if you’re not ready to expand just yet, upgrading from swapping hard drives or using a sync service like G Suite or Dropbox to a NAS system will provide a scalable approach to future growth.

This guide explains:

  1. Why NAS devices make sense for growing businesses.
  2. How to implement cloud sync for streamlined collaboration in four steps.
  3. How to protect data on your NAS devices with cloud backup.

➔ Download Our Complete NAS Guide

NAS = An Upgrade for Your Business

How do you handle data sharing and workflow between locations? Maybe you rely on ferrying external hard drives between offices, and you’re frustrated by the hassle and potential for human error. Maybe you use G Suite, and their new 2TB caps are killing your bottom line. Maybe you already use a NAS device, but you need to add another one and you’re not sure how to sync them.

Making collaboration easy and protecting your data in the process are likely essential goals for your business, and an ad hoc solution can only go so far. What worked when you started might not work for the long term if you want to achieve sustainable growth. Investing in a NAS device or multiple devices provides a few key advantages, including:

  • More storage. First and foremost, NAS provides more storage space than individual hard drives or individual workstations because NAS systems create a single storage volume from several drives (often arranged in a RAID scheme).
  • Faster storage. NAS works as fast as your local office network speed; you won’t need to wait on internet bandwidth or track down the right drive for restores.
  • Enhanced collaboration. As opposed to individual hard drives, multiple people can access a NAS device at the same time. You can also sync multiple drives easily, as we’ll detail below.
  • Better protection and security. Because the drives in a NAS system are configured in a RAID, the data stored on the drives is protected from individual drive failures. And drives do fail. A NAS device can also serve as a central place to hold backups of laptops, workstations, and servers. You can quickly recover those systems if they go down, and the backups can serve as part of an effective ransomware defense strategy.
  • Cost-efficiency. Compared to individual hard drives, NAS devices are a bigger upfront investment. But the benefits of more efficient workflows plus the protection from data loss and expensive recoveries make the investment well worth considering for growing businesses.

Hold up. What’s a RAID again?

RAID stands for “redundant array of independent disks.” It combines multiple hard drives into one or more storage volumes and distributes data across the drives to allow for data recovery in the event of one or multiple drive failures, depending on configuration.

The Next Step: Pairing NAS + Cloud

Most NAS devices come with software for cloud backup and cloud sync baked in. For our purposes, we’ll look specifically at the benefits of enabling cloud solutions on a QNAP NAS system to facilitate collaboration between offices and implement a 3-2-1 backup strategy.

NAS + Cloud + Sync = Collaboration

Pairing NAS systems with cloud storage enables you to sync files between multiple NAS devices, boosting collaboration between offices or remote teams. Each location has access to the same, commonly used, up-to-date documents or assets, and you no longer need an external service to share large files—just place them in shared folders on your local NAS and they appear on synced devices in minutes.

If this seems complex or maybe you haven’t even considered using cloud sync between offices, here’s a four-step process to configure sync on QNAP NAS devices and cloud storage:

    1. Prepare your cloud storage to serve as your content sync interchange. Create a folder in your cloud storage, separate from your backup folders, to serve as the interchange between the NAS systems in each office. Each of your NAS systems will stay synchronized with this cloud destination.
Step 1: Create cloud sync destination.
    2. Determine the content you want to make available across all of your offices. For example, it may be helpful to have a large main folder for the entire company, and folders within that organized by department. Then, use QNAP Sync to copy the contents of that folder to a new folder or bucket location in the cloud.
Step 2: Copy first source to cloud.
    3. Copy the content from the cloud location to your second NAS. You can speed this up by first syncing the data on your new office’s NAS on your local network, then physically moving it to the new location. Now, you have the same content on both NAS systems. If bringing your new NAS on-site isn’t possible due to geography or access issues, then copy the cloud folders you created in step two down to the second system over the internet.
Step 3: Copy cloud to second location.
    4. Set up two-way syncs between each NAS and the cloud. Now that you have the same shared files on both NAS systems and the cloud, the last step is to enable two-way sync from each location. Your QNAP NAS will move changed files up or down continuously, ensuring everyone is working on the most up-to-date files.
Step 4: Keep both locations synchronized via cloud.

With both NAS devices synchronized via the cloud, all offices have access to common folders and files can be shared instantaneously. When someone in one office wants to collaborate on a large file with someone in the other office, they simply move the file into their local all-office shared folder, and it will appear in that folder in the other office within minutes.

NAS + Cloud Storage = Data Security

An additional benefit of combining a NAS with cloud storage for backup is that it completes a solid 3-2-1 backup strategy: three copies of your data, with two stored locally on different media and one off-site. The cloud provides the off-site part of this equation. Here’s an example of how you’d accomplish this with a QNAP NAS in each office and simple cloud backup:

  1. Make sure that the systems in each office back up to that office’s QNAP NAS. You can use NetBak Replicator for Windows systems or Time Machine for Macs to accomplish this.
  2. Back up the NAS itself to cloud storage. Here’s a step-by-step guide on how to do this with Hybrid Backup Sync 3 to Backblaze B2 Cloud Storage, which is already integrated with NAS systems from QNAP.

With backup in place, if any of those office systems fail, you can restore them directly from your NAS, and your NAS itself is backed up to the cloud if some catastrophic event were to affect all of your in-office devices.
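The 3-2-1 rule can be expressed as a quick sanity check over an inventory of your data copies. This is a conceptual sketch; the medium and location labels are examples:

```python
# Sanity check for the 3-2-1 rule: at least three copies of the data,
# on at least two different media, with at least one copy off-site.

def satisfies_3_2_1(copies):
    """copies: list of (medium, location) pairs, one per copy of the data."""
    media = {medium for medium, _ in copies}
    has_offsite = any(location == "off-site" for _, location in copies)
    return len(copies) >= 3 and len(media) >= 2 and has_offsite

copies = [
    ("workstation drive", "on-site"),  # the live data
    ("NAS", "on-site"),                # local backup on different media
    ("cloud storage", "off-site"),     # the NAS backed up to the cloud
]
print(satisfies_3_2_1(copies))  # True
```

Drop the cloud copy from the list and the check fails, which is exactly the gap a NAS-only setup leaves open.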

Adding Up the Benefits of NAS + Cloud

To recap, here are a few takeaways to consider when managing data for a growing business:

  • NAS systems give you more storage on fast, local networks; better data protection than hard drives; and the ability to easily sync should you add locations or remote team members.
  • Connecting your NAS to cloud storage means every system in every office or location is backed up and protected, both locally and in the cloud.
  • Syncing NAS devices with the cloud gives all of your offices access to consistent, shared files on fast, local networks.
  • You no longer need to use outside services to share large files between offices.
  • You can configure backups and sync between multiple devices using software that comes baked in with a QNAP NAS system or augment with any of our Backblaze B2 integrations.

If you’re sick of putting out fires related to ad hoc collaboration solutions or just looking to upgrade from hard drives or G Suite, combining NAS systems with cloud storage delivers performance, protection, and easy collaboration between remote teams or offices.

Thinking about upgrading to a NAS device, but not sure where to start? Check out our NAS 101: Buyer’s Guide for guidance on navigating your choices. Already using NAS, but have questions about syncing? Let us know in the comments.

NAS 101: A Buyer’s Guide to the Features and Capacity You Need
https://www.backblaze.com/blog/nas-101-a-buyers-guide-to-the-features-and-capacity-you-need/
Fri, 29 Jan 2021
Read this guide to learn about buying the right NAS system for your growing business, including information on pairing it with cloud storage to ease collaboration and growth.

NAS components

As your business grows, the amount of data that it needs to store and manage also grows. Storing this data on loose hard drives and individual workstations will no longer cut it: Your team needs ready data access, protection from loss, and capacity for future growth. The easiest way to provide all three quickly and easily is network attached storage (NAS).

You might have already considered buying a NAS device, or you purchased one that you’ve already outgrown, or this could be your first time looking at your options. No matter where you’re starting, the number of choices and features NAS systems offer today can be overwhelming, especially when you’re trying to buy something that will work now and in the future.

This post aims to make your process a little easier. The following content will help you:

  • Review the benefits of a NAS system.
  • Navigate the options you’ll need to choose from.
  • Understand the reason to pair your NAS with cloud storage.

➔ Download Our Complete NAS Guide

How Can NAS Benefit Your Business?

There are multiple benefits that a NAS system can provide to users on your network, but we’ll recap a few of the key advantages here.

  • More Storage. It’s a tad obvious, but the primary benefit of a NAS system is that it will provide a significant addition to your storage capacity if you’re relying on workstations and hard drives. NAS systems create a single storage volume from several drives (often arranged in a RAID scheme).
  • Protection From Data Loss. Less obvious, but equally important, the RAID configuration in a NAS system ensures that the data you store can survive the failure of one or more of its hard drives. Hard drives fail! NAS helps to make that statement of fact less scary.
  • Security and Speed. Beyond protection from drive failure, NAS also provides security for your data from outside actors as it is only accessible on your local office network and to user accounts which you can control. Not only that, but it generally works as fast as your local office network speeds.
  • Better Data Management Tools. Fully automated backups, deduplication, compression, and encryption are just a handful of the functions you can put to work on a NAS system—all of which make your data storage more efficient and secure. You can also configure sync workflows to ease collaboration for your team, enable services to manage your users and groups with directory services, and even add services like photo or media management.

If this all sounds useful for your business, read on to learn more about bringing these benefits in-house.

NAS Buyer's Guide

The Network Attached Storage (NAS) Buyer’s Guide

How do you evaluate the differences between different NAS vendors? Or even within a single company’s product line? We’re here to help. This tour of the major components of a NAS system will help you to develop a tick list for the sizing and features of a system that will fit your needs.

Choosing a NAS: The Components

How your NAS performs is dictated by the components that make up the system and its capacity for future upgrades. Let’s walk through the different options.

NAS Storage Capacity: How Many Bays Do You Need?

One of the first ways to distinguish between different NAS systems is the number of drive bays a given system offers, as this determines how many disks the system can hold. Generally speaking, the larger the number of drive bays, the more storage you can provide to your users and the more flexibility you have around protecting your data from disk failure.

In a NAS system, storage is defined by the number of drives, the shared volume they create, and their striping scheme (e.g., RAID 0, 1, 5, 6, etc.). For example, one drive gives no additional performance or protection. Two drives allow the option of simple mirroring, referred to as RAID 1, where one volume is built from two drives, allowing for the failure of one of those drives without data loss. Two drives also allow for striping, referred to as RAID 0, where one volume is “stretched” across two drives, making a single, larger volume that gives some performance improvement but increases risk, because the loss of one drive means that the entire volume will be unavailable.

Refresher: How Does RAID Work Again?
A redundant array of independent disks, or RAID, combines multiple hard drives into one or more storage volumes. RAID distributes data and parity (drive recovery information) across the drives in different ways, and each layout provides different degrees of data protection.

Three drives is the minimum for RAID 5, which can survive the loss of one drive, though four drives is a more common NAS system configuration. Four drives is the minimum for RAID 6, which can survive the loss of two drives. Six to eight drives are very common NAS configurations that allow more storage space, performance, and even drive sparing—the ability to designate a standby drive to immediately rebuild a failed drive.

Many believe that, if you’re in the market for a NAS system with multiple bays, you should opt for capacity that allows for RAID 6 if possible. RAID 6 can survive the loss of two drives, and delivers performance nearly equal to RAID 5 with better protection.

It’s understandable to think: Why do I need to prepare in case two drives fail? Well, when a drive fails and you replace it with a fresh drive, the rebuilding process to restore that drive’s data and parity information can take a long time. Though it’s rare, it’s possible to have another drive fail during the rebuilding process. In that scenario, if you have RAID 6 you’re likely going to be okay. If you have RAID 5, you may have just lost data.
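The trade-off between these RAID levels becomes concrete with a quick usable-capacity calculation. This sketch is a simplification that ignores filesystem and formatting overhead:

```python
# Approximate usable capacity for common RAID layouts.

def usable_capacity(num_drives, drive_tb, raid_level):
    """Usable TB for a volume of num_drives drives of drive_tb each."""
    if raid_level == 0:                      # striping: all capacity, no protection
        return num_drives * drive_tb
    if raid_level == 1 and num_drives == 2:  # mirroring: half the raw capacity
        return drive_tb
    if raid_level == 5 and num_drives >= 3:  # one drive's worth of parity
        return (num_drives - 1) * drive_tb
    if raid_level == 6 and num_drives >= 4:  # two drives' worth of parity
        return (num_drives - 2) * drive_tb
    raise ValueError("unsupported drive count for this RAID level")

# Eight 10TB drives: RAID 5 keeps 70TB usable, RAID 6 keeps 60TB
# but survives two simultaneous drive failures.
print(usable_capacity(8, 10, 5), usable_capacity(8, 10, 6))  # 70 60
```

In other words, on an eight-bay system the extra protection of RAID 6 costs you one drive's worth of capacity compared to RAID 5.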

Buyer’s Note: Some systems are sold without drives. Should you buy NAS with or without drives? That decision usually boils down to the size and type of drives you’d like to have.

When buying a NAS system with drives provided:

  • The drives are usually covered by the manufacturer’s warranty as part of the complete system.
  • The drives are typically bought directly from the manufacturer’s supply chain and shipped directly from the hard drive manufacturer.

If you choose to buy drives separately from your NAS:

  • The drives may be a mix of drive production runs, and have been in the supply chain longer. Match the drive capacities and models for the most predictable performance across the RAID volume.
  • Choose drives rated for NAS systems—NAS vendors publish lists of supported drive types. Here’s a list from QNAP, for example.
  • Check the warranty and return procedures, and if you are moving a collection of older drives into your NAS, you may also consider how much of the warranty has already run out.

Buyer Takeaway: Choose a system that can support RAID 5 or RAID 6 to allow a combination of more storage space, performance, and drive failure protection. But be sure to check whether the NAS system is sold with or without drives.

Selecting Drive Capacity for the NAS: What Size of Drives Should You Buy?

You can quickly estimate how much storage you’ll need by adding up the hard drives and external drives of all the systems you’ll be backing up in your office, adding the amount of shared storage you’ll want to provide to your users, and factoring in any growing demand you project for shared storage.

If you have any historical data under management from previous years, you can calculate a simple growth rate. But include a buffer, as data growth accelerates every year. Generally speaking, price out systems at two to four times the size of your existing data capacity. Let’s say that the hard drives and external drives you need to back up, plus any additional shared storage you’d like to provide your users, add up to 20TB. Double that to 40TB to account for growth, then divide by a common hard drive size such as 10TB. With that in mind, you can start shopping for four-bay systems and larger.

Formula 1: ((Number of NAS Users x Hard Drive Size) + Shared Storage) x Growth Factor = NAS Storage Needed

Example: There are six users in an office who will each be backing up their 2TB workstations and laptops. The team will want another 6TB of shared storage for documents, images, and videos for everyone to use. Multiplied by a growth factor of two, you’d start shopping for NAS systems that offer at least 36TB of storage.

((Six users x 2TB each) + 6TB shared storage) x growth factor of two = 36TB

Formula 2: ((NAS Storage Needed / Hard Drive Size) + Two Parity Drives) = Drive Bays Needed

Example: Continuing the example above, when looking for a new NAS system using 12TB drives, accounting for two additional drives for RAID 6, you’d look for NAS systems that can support five or more drive bays of 12TB hard drives.

((36TB / 12TB) + two additional drives) = Five drive bays and up

If your budget allows, opting for larger drives and more drive bays will give you more storage overhead that you’ll surely grow into over time. Factor in, however, that if you go too big, you’re paying for unused storage space for a longer period of time. And if you use GAAP accounting, you’ll need to capitalize that investment over the same time window as a smaller NAS system, which will hit your bottom line on an annual basis. This is the classic CapEx vs. OpEx dilemma you can learn more about here.

If your cash budget is tight, you can always purchase a NAS system with more bays but smaller drives, which will significantly reduce your upfront cost. You can then replace those drives with larger ones when you need them. Hard drive prices generally fall over time, so they will likely be less expensive in the future. You’ll end up purchasing two sets of drives over time, which is less cash-intensive at the outset, but likely more expensive in the long run.

Similarly, you can partially fill the drive bays. If you want to get an eight bay system, but only have the budget for six drives, just add the other drives later. One of the best parts of NAS systems is the flexibility they allow you for right-sizing your shared storage approach.

Diagram of all the components of a NAS Device

Buyer Takeaway: Estimate how much storage you’ll need, add the amount of shared storage you’ll want to provide to your users, and factor in growing demand for shared storage—then balance long term growth potential against cash flow.

Processor, Controllers, and Memory: What Performance Levels Do You Require?

Is it better to have big onboard processors or controllers? Smaller embedded chips common in entry-level NAS systems provide basic functionality, but they might bog down when serving many users or crunching through deduplication and encryption tasks, which are options with many backup solutions. Larger NAS systems, typically installed in IT data center racks, usually offer multiple storage controllers that can deliver the fastest performance and even failover capability.

  • Processor: Provides compute power for the system operation, services, and applications.
  • Controller: Manages the storage volume presentation and health.
  • Memory: Improves speed of applications and file serving performance.

ARM and Intel Atom chips are good for basic systems, while larger and more capable processors such as the Intel Core i3 and Core i5 are faster at NAS tasks like encryption, deduplication, and serving any onboard apps. Server-class Xeon chips can be found in many rack-mounted systems, too.

So if you’re just looking for basic storage expansion, the entry-level systems with more modest, basic chips will likely suit you just fine. If deduplication, encryption, sync, and other functions many NAS systems offer as optional tools are part of your future workflow, this is one area where you shouldn’t cut corners.

installing NAS memory cards
Adding memory modules to your NAS can be a simple performance upgrade.

If you have the option to expand the system memory, this can be an easy performance upgrade. Generally, a higher ratio of memory to drives improves the performance of reading from and writing to disk, as well as the speed of onboard applications.

Buyer Takeaway: Entry-level NAS systems provide good basic functionality, but you should ensure your components are up to the challenge if you plan to make heavy use of deduplication, encryption, compression, and other functions.

Network and Connections: What Capacity for Speed Do You Need?

A basic NAS will have a Gigabit Ethernet connection, which you will often find listed as 1GigE. That throughput of 1 Gb/s on the network is equivalent to 125 MB/s coming from your storage system, meaning the NAS must fit storage service for all users within that limit, which is usually not an issue when serving only a few users. Many systems offer expansion ports inside, allowing you to purchase a 10GigE network card later to upgrade your NAS.

Synology Ethernet network connection
An example of a small 10GigE add-in card that can boost your NAS network performance.

Some NAS vendors offer 2.5 Gb/s, or 5 Gb/s connections on their systems—these will give you more performance than 1GigE connections, but usually require that you get a compatible network switch, and possibly, USB adapters or expansion cards for every system that will connect to that NAS via the switch. If your office is already wired for 10GigE, make sure your NAS is also 10GigE. Otherwise, the more network ports in the back of the system, the better. If you aren’t ready to get a 10GigE capable system now, but you think you might be in the future, select a system that has expansion capability.

Multi-ports on a QNAP NAS
Some NAS systems offer not only multiple network ports, but faster connections as well, such as Thunderbolt™.

Some systems provide another option of Thunderbolt connections in addition to Ethernet connections. These allow laptops and workstations with Thunderbolt ports to directly connect to the NAS and offer much higher bandwidth—up to 40 Gb/s (5 GB/s)—which is good for systems that need to edit large files directly on the NAS, as is often the case in video editing. If you’ll be directly connecting systems that need the fastest possible speeds, select a system with Thunderbolt ports, one per Thunderbolt-connected user.

Buyer Takeaway: It’s best to have more network ports in the back of your system. Or, select a system with network expansion card capability.

Caching and Hybrid Drive Features: How Fast Do You Need to Serve Files?

Many of the higher-end NAS systems can complement standard 3.5” hard drives with higher-performing, smaller form factor SSD or M.2 drives. These smaller, faster drives can dramatically improve the NAS file serving performance by caching the most recently or most frequently requested files. By combining these different types of drives, the NAS can deliver both improved file serving performance and large capacity.

As the number of users you support in each office grows, these capabilities will become more important as a relatively simple way to boost performance. As we mentioned earlier, you can purchase a system with these slots unpopulated and add the drives later.

Buyer Takeaway: Combine different types of drives, like smaller form factor SSD or M.2 storage alongside 3.5” hard drives, to gain improved file serving performance.

Operating System: What Kind of Management Features Do You Require?

NAS OS dashboard

The NAS operating systems of the major vendors generally provide the same services in an OS-like interface delivered via an on-board web server. By simply typing in your NAS’s IP address, you can sign in and manage your system’s settings, create and manage the storage volumes, set up groups of users on your network who have access, configure and monitor backup and sync tasks, and more.

If you need specific user management features for your IT environment, or simply want to see how a NAS OS works, you can spin up a demonstration virtual machine offered by some NAS vendors. You can test service configuration and get a feel for the interface and tools, though as a virtual environment it obviously won’t let you manage hardware directly. Here are some options:

Buyer Takeaway: The on-board NAS OS looks similar to a Mac or PC operating system to make it easy to navigate system setup and maintenance and allows you to manage settings, storage, and tasks.

Solutions: What Added Services Do You Require?

While the onboard processor and memory on your NAS are primarily for file service, backup, and sync tasks, you can also install other solutions directly onto it. For instance, QNAP and Synology—two popular NAS providers—have app stores accessible from their management software where you can select applications to download and install on your NAS. You might be interested in a backup and sync solution such as Archiware, or CMS solutions like Joomla or WordPress.

App Center for NAS applications
Applications available to install directly within some NAS vendors’ management system.

However, beyond backup solutions, you’d benefit from installing mission-critical apps onto a dedicated system rather than on your NAS. For a small number of users, running applications directly on the NAS can be a good temporary use or a pathway to testing something out. But if the application becomes very busy, it could impact the other services of the NAS. Big picture, native apps on your NAS can be useful, but don’t overdo it.

Buyer Takeaway: The main backup and sync apps from the major NAS vendors are excellent—give them a good test drive, but know that there are many excellent backup and sync solutions available as well.

Why Adding Cloud Storage to Your NAS Offers Additional Benefits

When you pair cloud storage with your NAS, you gain access to features that complement the security of your data and your ability to share files both locally and remotely.

To start with, cloud storage provides off-site backup protection. This aligns your NAS setup with the industry standard for data protection: a 3-2-1 backup strategy—which ensures that you have three copies of your data, the source data and two backups—one backup on your NAS, and the other protected off-site. And in the event of data loss, you can restore your systems directly from the cloud even if all the systems in your office are knocked out or destroyed.

While data sent to the cloud is encrypted in flight via SSL, you can also encrypt your backups so that they can only be opened with your team’s encryption key. The cloud can also give you advanced storage options for your backup files like Write Once, Read Many (WORM) or immutability—making your data unchangeable for a defined period of time—or custom data lifecycle rules at the bucket level to help match your ideal backup workflow.

Additionally, cloud storage provides valuable access to your data and documents from your NAS through sync capabilities. In case anyone on your team needs to access a file when they are away from the office, or as is more common now, in case your entire team is working from home, they’ll be able to access the files that have been synced to the cloud through your NAS’s secure sync program. You can even sync across multiple locations using the cloud as a two-way sync to quickly replicate data across locations. For employees collaborating across great distances, this helps to ensure they’re not waiting on the internet to deliver critical files: They’re already on-site.

Refresher: What’s the Difference Between Cloud Sync, Cloud Backup, and Cloud Storage? Sync services allow multiple users across multiple devices to access the same file. Backup stores a copy of those files somewhere remote from your work environment, oftentimes in an off-site server—like cloud storage. It’s important to know that a “sync” is not a backup, but they can work well together when properly coordinated. You can read more about the differences in this blog post.

Ready to Set Up Your NAS With Cloud Storage

To summarize, here are a few things to remember when shopping for a NAS system:

  • Consider how much storage you’ll need for both local backup and for shared user storage.
  • Look for a system with three to five drive bays at minimum.
  • Check that the NAS system is sold with drives—if not, you’ll have to source enough of the same size drives.
  • Opt for a system that lets you upgrade the memory and network options.
  • Choose a system that meets your needs today; you can always upgrade in the future.

Coupled with cloud storage like Backblaze B2 Cloud Storage, which is already integrated with NAS systems from Synology and QNAP, you gain necessary backup protection and restoration from the cloud, as well as the capability to sync across locations.

Have more questions about NAS features or how to implement a NAS system in your environment? Ask away in the comments.

The post NAS 101: A Buyer’s Guide to the Features and Capacity You Need appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
https://www.backblaze.com/blog/nas-101-a-buyers-guide-to-the-features-and-capacity-you-need/feed/ 9
Development Roadmap: Power Up Apps With Go Programming Language and Cloud Storage https://www.backblaze.com/blog/development-roadmap-power-up-apps-with-go-programming-language-and-cloud-storage/ https://www.backblaze.com/blog/development-roadmap-power-up-apps-with-go-programming-language-and-cloud-storage/#respond Tue, 15 Dec 2020 16:57:45 +0000 https://www.backblaze.com/blog/?p=97078 Learn more about using Go in your development environment with this primer for connecting an app to cloud storage.

The post Development Roadmap: Power Up Apps With Go Programming Language and Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

If you build apps, you’ve probably considered working in Go. After all, the open-source language has become more popular with developers every year since its introduction. With a reputation for simplicity in meeting modern programming needs, it’s no surprise that GitHub lists it as the 10th most popular coding language out there. Docker, Kubernetes, rclone—all developed in Go.

If you’re not using Go, this post will suggest a few reasons you might give it a shot in your next application, with a specific focus on another reason for its popularity: its ease of use in connecting to cloud storage—an increasingly important requirement as data storage and delivery becomes central to wide swaths of app development. With this in mind, the following content will also outline some basic and relatively straightforward steps to follow for building an app in Go and connecting it to cloud storage.

But first, if you’re not at all familiar with this programming language, here’s a little more background to get you started.

What Is Go?

Go (sometimes referred to as Golang) is a modern coding language that can perform as well as low-level languages like C, yet is simpler to program and takes full advantage of modern processors. Similar to Python, it can meet many common programming needs and is extensible with a growing number of libraries. These conveniences don’t make it slow, either: applications written in Go compile to machine code that runs nearly as fast as programs written in C. It’s also designed to take advantage of multiple cores and concurrency routines, and is generally regarded as being faster than Java.

Why Use Go With Cloud Storage?

No matter how fast or efficient your app is, how it interacts with storage is crucial. Every app needs to store content on some level. And even if you keep some of the data your app needs closer to your CPU operations, or on other storage temporarily, it still benefits you to use economical, active storage.

Here are a few of the primary reasons why:

  • Massive amounts of user data. If your application allows users to upload data or documents, your eventual success will mean that storage requirements for the app will grow exponentially.
  • Application data. If your app generates data as a part of its operation, such as log files, or needs to store both large data sets and the results of compute runs on that data, connecting directly to cloud storage helps you to manage that flow over the long run.
  • Large data sets. Any app that needs to make sense of giant pools of unstructured data, like an app utilizing machine learning, will operate faster if the storage for those data sets is close to the application and readily available for retrieval.

Generally speaking, active cloud storage is a key part of delivering ideal OpEx as your app scales. You’re able to ensure that as you grow, and your user or app data grows along with you, your need to invest in storage capacity won’t hamper your scale. You pay for exactly what you use as you use it.

Whether you buy the argument here, or you’re just curious, it’s easy and free to test out adding this power and performance to your next project. Follow along below for a simple approach to get you started, then tell us what you think.

How to Connect an App Written in Go With Cloud Storage

Once you have your Go environment set up, you’re ready to start building code in your Go workspace directory ($GOPATH). This example builds a Go app that connects to Backblaze B2 Cloud Storage using the AWS S3 SDK.

Next, create a bucket to store content in. You can create buckets programmatically in your app later, but for now, create a bucket in the Backblaze B2 web interface, and make note of the associated server endpoint.

Now, generate an application key for the tool, scope its access to the new bucket only, and make sure that "Allow listing all bucket names" is selected:


Make note of the bucket server connection and app key details. Then use a Go module—for instance, the popular godotenv—to make the configuration available to the app; it looks in the app root for a .env (hidden) file.

Create the .env file in the app root with your credentials:
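For reference, a .env file is just a plain-text list of KEY=value pairs. The variable names below are hypothetical placeholders; substitute whatever names your app reads, along with the real values from your Backblaze B2 account:

```
# Hypothetical names and placeholder values — substitute your own.
B2_KEY_ID=<your-application-key-id>
B2_APP_KEY=<your-application-key>
B2_BUCKET=<your-bucket-name>
B2_ENDPOINT=<your-bucket-s3-endpoint>
B2_REGION=<your-bucket-region>
```

Keep this file out of version control, since it holds live credentials.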

With configuration complete, build a package that connects to Backblaze B2 using the S3 API and S3 Go packages.

First, import the needed modules:

Then create a new client and session that uses those credentials:

And then write functions to upload, download, and delete files:

Now, put it all to work to make sure everything performs.

In the main test app, first import the modules, including godotenv and the functions you wrote:

Read in and reference your configuration:

And now, time to exercise those functions and see files upload and download.

For example, this extraordinarily compact chunk of code is all you need to list, upload, download, and delete objects to and from local folders:

If you haven’t already, run go mod init to initialize the module dependencies, and run the app itself with go run backblaze_example_app.go.

Here, a listResult has been thrown in after each step, with comments, so that you can follow the progress as the app lists the number of objects in the bucket (in this case, zero), uploads your specified file from the dir_upload folder, then downloads it back down again to dir_download:

Use another tool like rclone to list the bucket contents independently and verify the file was uploaded:

Or, of course, look in the Backblaze B2 web admin:

And finally, looking in the local system’s dir_download folder, see the file you downloaded:

With that—and code at https://github.com/GiantRavens/backblazeS3—you have enough to explore further, connect to Backblaze B2 buckets with the S3 API, list objects, pass in file names to upload, and more.

Get Started With Go and Cloud Storage

With your app written in Go and connected to cloud storage, you’re able to grow at hyperscale. Happy hunting!

If you’ve already built an app with Go and have some feedback for us, we’d love to hear from you in the comments. And if it’s your first time writing in Go, let us know what you’d like to learn more about!

The post Development Roadmap: Power Up Apps With Go Programming Language and Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
https://www.backblaze.com/blog/development-roadmap-power-up-apps-with-go-programming-language-and-cloud-storage/feed/ 0
Block-level Deduplication, Compression, and Encrypted Backup With Duplicati https://www.backblaze.com/blog/duplicati-backups-cloud-storage/ https://www.backblaze.com/blog/duplicati-backups-cloud-storage/#comments Tue, 01 Dec 2020 16:53:16 +0000 https://www.backblaze.com/blog/?p=82396 Read this post to learn how to use Duplicati, an open-source backup client that can securely store encrypted, incremental, compressed backups in cloud storage.

The post Block-level Deduplication, Compression, and Encrypted Backup With Duplicati appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

When you’re responsible for backing up your organization’s servers, workstations, and NAS storage systems, choosing the best mix of protection vs. solution cost can feel like a high-wire act. One way to meet this challenge is to consider how your backups are stored: In contrast to file-based or storage-sync backup solutions, traditional block-level backup paired with deduplication and compression will keep your data footprint as small and efficient as possible—enhancing your ability to back up everything you might want to.

With its block-level approach, Duplicati is one tool that, depending on your use case, can help you avoid picking and choosing which systems to protect. This post explores what Duplicati does, and how it might work for you.

Duplicati From 20,000 Feet

Duplicati is an open-source project that many IT administrators have found meets the same specifications as commercial backup solutions—including an excellent interface to manage and monitor underlying backup functions, and encryption so your backups can only be restored and accessed by you—without prohibitive complexity and pricing.

There are no support or maintenance contracts required, and no charge to use the software, so you can install it on every system needing protection. When paired with a cloud storage solution, Duplicati can quickly and affordably deliver backup protection for every system you are responsible for.


Duplicati is a free, open-source backup client for macOS, Windows, and Linux that securely stores encrypted, incremental, compressed backups in cloud storage like Backblaze B2 Cloud Storage.

Duplicati Features at a Glance

Here’s a brief review of Duplicati’s key features to help in your evaluation. If you’re already sold on its functionality, skip ahead to the next section.

Installation and Architecture

As an open-source tool, Duplicati is free to download and use, does not require a maintenance contract, and yet is frequently updated, with installers for macOS, Windows, and Linux systems, as well as for NAS storage systems such as Synology. Once installed, you access Duplicati’s features through a web interface, from mobile devices, or via the command line, if you prefer. It’s a good practice to install the same tool on all of your servers, workstations, and NAS systems to make administration as easy and uniform as possible.

Backup Formats

Duplicati gathers all files to be backed up, deduplicates and compresses them, then sends them to your backup location in blocks or chunks to be stored for maximum efficiency. The first time you run a Duplicati backup job it will perform a full backup of your system, then each backup job after that will send incremental backups of the changed files, further sparing your bandwidth.

Security and Encryption

Backups stored using Duplicati can be encrypted with AES-256 encryption with custom pass-phrases, or integrate with your GPG security toolchain. This ensures that only your team can decrypt the backups and restore the files, preserving data security.

Other Advantages

While ease of use, storage format, and security will likely apply to just about every use case, Duplicati offers a number of other features that can lighten your administrative load and offer peace of mind. For instance, Duplicati lets you establish schedules to automate your backup jobs and how you’d like to be notified on job completion or alerted if there’s a problem. If you have a distributed, or highly mobile team, this can be hugely helpful. Additionally, Duplicati can also be set to periodically download a random set of backup files, restore them, and verify their integrity again. Any backup strategy should include a testing regimen, and Duplicati provides an easy way to set one up in your workflow.

Installing and Configuring Duplicati

This example outlines the installation and configuration of Duplicati using cloud storage for backups—in this case, Backblaze B2.

Begin by creating a dedicated storage bucket, and ensuring that it’s set to private.

Next, navigate to Backblaze B2, then App Keys, then Add a New Application Key. Once there, generate an Application Key and make sure that it has read and write access to the dedicated storage bucket you’ve made:


Before exiting, make a note of the bucket name, Application Key, and ID for your configuration work in Duplicati.

Configuring Your First Duplicati Backup

Duplicati’s manual will help guide you through installation, which begins with downloading the most recent build for your platform and following the wizard-style installation steps.

Once installed, Duplicati will ask you to set a user password, then present you with the main screen, ready to configure your first backup with Backblaze B2. Select Add Backup, then Add a New Backup, then Configure a New Backup.

Duplicati is organized around the principle of “tasks” that specify a backup job for a single system. When setting up a task, you’ll be asked to name the backup job, to name the folder that will hold your backups, to specify optional encryption settings and a passphrase, and then specify the target destination, in this case Backblaze B2.


Enter the credentials and bucket information you created earlier.

Be sure to test the connection before going further.

Click next to select the systems and folders to include in this backup task, then next again to fine-tune how often and when you want this backup job to run.

Finally, save your backup job.

Now, back on the Duplicati home screen, you can see your defined backup job and inspect its progress:

As you can see when you navigate back to Backblaze, the actual backup blocks are compressed files that can only be restored and decrypted by you.


In only a few minutes, you’ve configured your first Duplicati backup task and are ready to start protecting the rest of your server, workstation, and NAS storage fleet.

Some Notes on Backing Up to Cloud Storage

As you’re considering where to store your backups—or as you’re trying to convince others of the value of off-site cloud storage—there are two things to keep in mind: First, using cloud storage will put you on the right side of your budget. Cloud storage allows you to keep your storage in operating expense, which means your budget can be scaled up and down as needed, and you don’t have money needlessly tied up in aging tech infrastructure. (You can read more about a CapEx vs. OpEx strategy, here.) Second, cloud storage helps you ensure that your backups are off-site to avoid any local system outage, and yet they also remain immediately accessible so that your recovery can begin immediately. If you liked the ease of working with Backblaze B2 in this example, you can learn more about the service, here.

Does Duplicati Meet Your Checklist?

Duplicati’s block-level backup, with deduplication, compression, and encryption could be exactly what your organization needs, and eliminating license, support, and maintenance costs are highly attractive as well. When paired with cloud storage like Backblaze B2, which also supports server-side encryption using AES-256, you can affordably protect every system in your organization.

Do these features meet your checklist? How was your experience getting it set up? Let us know in the comments.

The post Block-level Deduplication, Compression, and Encrypted Backup With Duplicati appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
https://www.backblaze.com/blog/duplicati-backups-cloud-storage/feed/ 4
Rclone Power Moves for Backblaze B2 Cloud Storage https://www.backblaze.com/blog/rclone-power-moves-for-backblaze-b2-cloud-storage/ https://www.backblaze.com/blog/rclone-power-moves-for-backblaze-b2-cloud-storage/#respond Tue, 03 Nov 2020 16:41:10 +0000 https://www.backblaze.com/blog/?p=96681 Learn about five advanced rclone techniques you can use with Backblaze B2 to help you on your path to storage admin mastery.

The post Rclone Power Moves for Backblaze B2 Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

Rclone is described as the “Swiss Army chainsaw” of storage movement tools. While it may seem, at first, to be a simple tool with two main commands to copy and sync data between two storage locations, deeper study reveals a hell of a lot more. True to the image of a “Swiss Army chainsaw,” rclone contains an extremely deep and powerful feature set that empowers smart storage admins and workflow scripters everywhere to meet almost any storage task with ease and efficiency.


Rclone—rsync for cloud storage—is a powerful command line tool to copy and sync files to and from local disk, SFTP servers, and many cloud storage providers. Rclone’s Backblaze B2 Cloud Storage page has many examples of configuration and options with Backblaze B2.

Continued Steps on the Path to rclone Mastery

In our in-depth webinar with Nick Craig-Wood, developer and principal maintainer of rclone, we discussed a number of power moves you can use with rclone and Backblaze B2. This post takes it a number of steps further with five more advanced techniques to add to your rclone mastery toolkit.

Have you tried these and have a different take? Just trying them out for the first time? We hope to hear more and learn more from you in the comments.

Use --track-renames to Save Bandwidth and Increase Data Movement Speed

If you’re constantly moving files from disk to the cloud, you know that your users frequently re-organize and rename folders and files on local storage. That means when it’s time to back up those renamed folders and files again, your object storage will see them as new objects and you’ll have to re-upload them all over again.

Rclone is smart enough to take advantage of Backblaze B2 Native APIs for remote copy functionality, which saves you from re-uploading files that are simply renamed and not otherwise changed.

By specifying the --track-renames flag, rclone keeps track of file sizes and hashes during operations. When source and destination files match but the names differ, rclone simply copies them over on the server side with the new name, saving you from having to upload the object again. Use the --progress or --verbose flags to see these remote copy messages in the log.

rclone sync /Volumes/LocalAssets b2:cloud-backup-bucket \
--track-renames --progress --verbose

2020-10-22 17:03:26 INFO : customer artwork/145.jpg: Copied (server side copy)
2020-10-22 17:03:26 INFO : customer artwork/159.jpg: Copied (server side copy)
2020-10-22 17:03:26 INFO : customer artwork/163.jpg: Copied (server side copy)
2020-10-22 17:03:26 INFO : customer artwork/172.jpg: Copied (server side copy)
2020-10-22 17:03:26 INFO : customer artwork/151.jpg: Copied (server side copy)

With the --track-renames flag, you’ll see messages like these when renamed files are simply copied server side instead of being re-uploaded.

 

Easily Generate Formatted Storage Migration Reports

When migrating data to Backblaze B2, it’s good practice to inventory the data about to be moved, then, once the migration completes, generate reports confirming that every byte made it over properly.

For example, you could use the rclone lsf -R command to recursively list the contents of your source and destination storage buckets, compare the results, then save the reports as a simple comma-separated values (CSV) list. This list is easily parsed and processed by your reporting tool of choice.

rclone lsf --csv --format ps amzns3:/customer-archive-source
159.jpg,41034
163.jpg,29291
172.jpg,54658
173.jpg,47175
176.jpg,70937
177.jpg,42570
179.jpg,64588
180.jpg,71729
181.jpg,63601
184.jpg,56060
185.jpg,49899
186.jpg,60051
187.jpg,51743
189.jpg,60050

rclone lsf --csv --format ps b2:/customer-archive-destination
159.jpg,41034
163.jpg,29291
172.jpg,54658
173.jpg,47175
176.jpg,70937
177.jpg,42570
179.jpg,64588
180.jpg,71729
181.jpg,63601
184.jpg,56060
185.jpg,49899
186.jpg,60051
187.jpg,51743
189.jpg,60050

Example CSV output of file names and file sizes in source and target folders.
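If you save each listing to a file, standard shell tools can handle the comparison step. Here’s a minimal sketch, assuming the listings have been captured to local files (the file names and sample contents below are illustrative stand-ins for real rclone lsf output):

```shell
# Stand-ins for listings captured with `rclone lsf --csv --format ps`
printf '159.jpg,41034\n163.jpg,29291\n172.jpg,54658\n' > source.csv
printf '159.jpg,41034\n172.jpg,54658\n' > destination.csv

# comm expects sorted input; lines unique to the source (-23) are the
# files that have not yet made it to the destination
sort source.csv -o source.csv
sort destination.csv -o destination.csv
comm -23 source.csv destination.csv
```

Here, 163.jpg would be flagged as present in the source but missing from the destination, a quick signal to re-run your copy before decommissioning the source.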

You can even feed the results of regular storage operations into a system dashboard or reporting tool by specifying JSON output with the --use-json-log flag.

In the following example, we want to build a report listing missing files in either the source or the destination location. (The comparison itself comes from rclone’s check command, run with the --use-json-log flag.)

The resulting log messages make it clear that the comparison failed. The JSON format lets me easily select log warning levels, timestamps, and file names for further action.

{"level":"error","msg":"File not in parent bucket path customer_archive_destination","object":"216.jpg","objectType":"*b2.Object","source":"operations/check.go:100","time":"2020-10-23T16:07:35.005055-05:00"}
{"level":"error","msg":"File not in parent bucket path customer_archive_destination","object":"219.jpg","objectType":"*b2.Object","source":"operations/check.go:100","time":"2020-10-23T16:07:35.005151-05:00"}
{"level":"error","msg":"File not in parent bucket path travel_posters_source","object":".DS_Store","objectType":"*b2.Object","source":"operations/check.go:78","time":"2020-10-23T16:07:35.005192-05:00"}
{"level":"warning","msg":"12 files missing","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:225","time":"2020-10-23T16:07:35.005643-05:00"}
{"level":"warning","msg":"1 files missing","object":"parent bucket path travel_posters_source","objectType":"*b2.Fs","source":"operations/check.go:228","time":"2020-10-23T16:07:35.005714-05:00"}
{"level":"warning","msg":"13 differences found","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:231","time":"2020-10-23T16:07:35.005746-05:00"}
{"level":"warning","msg":"13 errors while checking","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:233","time":"2020-10-23T16:07:35.005779-05:00"}
{"level":"warning","msg":"28 matching files","object":"parent bucket path customer_archive_destination","objectType":"*b2.Fs","source":"operations/check.go:239","time":"2020-10-23T16:07:35.005805-05:00"}
2020/10/23 16:07:35 Failed to check with 14 errors: last error was: 13 differences found

Example: JSON output from rclone check command comparing two data locations.
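Because each line is a self-contained JSON object, even basic shell tools can pull out the fields you need for a report. As a sketch, here’s how you might list just the objects flagged at error level (the sample log is abbreviated from the output above):

```shell
# Abbreviated sample of rclone's --use-json-log output
cat > check.log <<'EOF'
{"level":"error","msg":"File not in parent bucket path customer_archive_destination","object":"216.jpg"}
{"level":"error","msg":"File not in parent bucket path customer_archive_destination","object":"219.jpg"}
{"level":"warning","msg":"12 files missing","object":"parent bucket path customer_archive_destination"}
EOF

# List just the objects reported missing at error level
grep '"level":"error"' check.log | sed 's/.*"object":"\([^"]*\)".*/\1/'
```

A JSON-aware tool like jq makes this even cleaner if it’s available on your system.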

 

Use a Static Exclude File to Ban File System Lint

While rclone has a host of flags you can specify on the fly to match or exclude files for a data copy or sync task, it’s hard to remember all the operating system or transient files that can clutter up your cloud storage. Who hasn’t had to laboriously delete macOS’s hidden folder view settings (.DS_Store) or Windows’ ubiquitous thumbnails database (Thumbs.db) from your pristine cloud storage?

By building your own customized exclude file of all the files you never want to copy, you can filter them all out with a single flag and consistently keep your storage buckets lint free.

In the following example, I’ve saved a text file under my user directory’s rclone folder and call it with --exclude-from rather than using --exclude (as I would if filtering on the fly):

rclone sync /Volumes/LocalAssets b2:cloud-backup-bucket \
    --exclude-from ~/.rclone/exclude.conf

.DS_Store
.thumbnails/**
.vagrant/**
.gitignore
.git/**
.Trashes/**
.apdisk
.com.apple.timemachine.*
.fseventsd/**
.DocumentRevisions-V100/**
.TemporaryItems/**
.Spotlight-V100/**
.localization/**
TheVolumeSettingsFolder/**
$RECYCLE.BIN/**
System Volume Information/**

Example of exclude.conf that lists all of the files you explicitly don’t want to ever sync or copy, including Apple storage system tags, Trash files, git files, and more.

 

Mount a Cloud Storage Bucket or Folder as a Local Disk

Rclone takes your cloud-fu to a truly new level with these last two moves.

Since Backblaze B2 is active storage (all contents are immediately available) and extremely cost-effective compared to other media archive solutions, it’s become a very popular archive destination for media.

If you mount extremely large archives as if they were massive external disks on your server or workstation, visual searching through object storage, along with a whole host of other possibilities, becomes a reality.

For example, suppose you are tasked with keeping a large network of digital signage kiosks up-to-date. Rather than trying to push from your source location to each and every kiosk, let the kiosks pull from your single, always up-to-date archive in Backblaze!

With FUSE installed, rclone can mount your cloud storage at any mount point on your workstation or server. The bucket appears instantly, and your OS will start building thumbnails and let you preview the files normally.

rclone mount b2:art-assets/video ~/Documents/rclone_mnt/

Almost immediately after mounting this cloud storage bucket of HD and 4K video, macOS has built thumbnails, and even lets me preview these high-resolution video files.

Behind the scenes, rclone’s clever use of VFS and caching makes this magic happen. You can tweak settings, such as --vfs-cache-mode and --dir-cache-time, to more aggressively cache the object structure for your use case.

Serve Content Directly From Cloud Storage With a Pop-up Web or SFTP Server

Many times, you’re called on to give users temporary access to certain cloud files quickly. Whether it’s for an approval or a file handoff, this means figuring out how to get the file to a place where users can reach it with tools they already know. Trying to email a 100GB file is no fun, and downloading the file and moving it to another system that the user can access eats up time.

Or perhaps you’d like to set up a simple, uncomplicated way to let users browse a large PDF library of product documents. Instead of moving files to a dedicated SFTP or web server, simply serve them directly from your cloud storage archive with rclone using a single command.

Rclone’s serve command can present your content stored with Backblaze over a range of protocols, including FTP, SFTP, WebDAV, HTTP, and HTTPS, all as easy for users to access as a web browser.

In the following example, I export the contents of the same folder of high-resolution video used above and present it using the WebDAV protocol. With zero HTML or complicated server setup, my users instantly get web access to this content, and even a searchable interface:

rclone serve webdav b2:art_assets/video
2020/10/23 17:13:59 NOTICE: B2 bucket art_assets/video: WebDav Server started on http://127.0.0.1:8080/

Immediately after exporting my cloud storage folder via WebDAV, users can browse to my system and search for all “ProRes” files and download exactly what they need.

For more advanced needs, you can choose the HTTP or HTTPS option and specify custom data flags that populate web page templates automatically.

Continuing Your Study

Combined with our rclone webinar, these five moves will place you well on your path to rclone storage admin mastery, letting you confidently take on complicated data migration tasks with an ease and efficiency that will amaze your peers.

We look forward to hearing of the moves and new use cases you develop with these tools.

The post Rclone Power Moves for Backblaze B2 Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Not So Suite: Dealing With Google’s New 2TB Caps
https://www.backblaze.com/blog/not-so-suite-dealing-with-googles-new-2tb-caps/
Thu, 08 Oct 2020

It’s easy to get used to “all you can eat” data plans—and one of the biggest justifications for using G Suite until now was that users could store as much as they wanted. But when we have unlimited data, we tend to forget how much our content is growing until someone tells us our unlimited data plan is now… limited?

So it was a bit of a shock for lots of G Suite users to learn that they now only get 2TB per user for their $12 per user per month plan.

Hat tip to Jacob Hands, who alerted us to this on Twitter!

G Suite users have to upgrade to the Enterprise class of service to retain unlimited storage. It’s unclear how much that costs because their pricing chart refers you to a sales representative if you want to get a quote. But as is true in restaurants: If you need to ask, it’s probably more expensive than you’d care to know.

If you’ve been using G Suite for long, and especially if you work with large data sets or rich media, you’re probably using more than 2TB per user. You’re going to need a plan to not only reduce your storage footprint on Google, but also safely store the content you’re forced to move while making it available and useful for your users. What do you do?

Side Note: Backblaze has proudly offered unlimited backup plans at a fixed price for close to 14 years, and we’ll continue to do so. This article focuses on solutions for teams using G Suite for collaboration. If you just need a solid backup, check out our guide on backing up your G Suite data. If you’re looking for an incredible cloud storage offering, read on to learn about Backblaze B2 Cloud Storage.

Take Control of Your Shared User Content

Good question. You can make the largest reduction quickly by shifting videos, image libraries, and data sets out of Google Drive and into Backblaze B2 Cloud Storage.

Backblaze B2, of course, is our easy-to-use cloud storage that stores everything you want to protect at only $5/TB per month and makes it all immediately available the instant you need it.

Getting started is as simple as signing up. From there, you can upload files and browse them in Backblaze’s web interface, or use any one of hundreds of solutions that incorporate Backblaze B2 seamlessly, such as the popular (and free) Cyberduck file browser.

With your Backblaze B2 account set up, it’s time to start pruning files in Google Drive and preparing them for transfer!

Step One: Take an Inventory of What You Have in Google Drive

Back in Google Drive, organize your efforts by file size. In other words, move the biggest stuff first. The simplest way to uncover large files is a not-so-obvious search feature that filters by file type: ZIP archives, videos, and photos will almost surely be filling the most space. To select files by type, click the tiny triangle at the right of the search field to reveal a file type dropdown.

Using Google Drive’s search field, and the dropdown triangle, you can specify large files to move manually.

Step Two: Migrate Your Data to Backblaze B2

Now it’s time to carry out your plan: Download your data from G Suite, copy it to Backblaze B2 Cloud Storage Buckets, and get your data organized! Once your files are safely downloaded and uploaded to your Backblaze B2 account, they can be removed from Google Drive.
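Before uploading, it can pay to sanity-check what you’ve downloaded from the command line and confirm the big files made it. This sketch uses illustrative paths and a deliberately tiny stand-in for a large file; in practice you’d raise the size threshold (e.g., +1G):

```shell
# Illustrative stand-in for a local Google Drive export
mkdir -p drive-export
printf 'meeting notes' > drive-export/notes.txt
head -c 2048 /dev/zero > drive-export/render.mov   # stand-in "large" file

# List files larger than 1KiB; raise the threshold for real exports
find drive-export -type f -size +1k
```

Only the larger file is listed, so you know exactly which items to prioritize when uploading to your Backblaze B2 Bucket.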

Welcome!

Backblaze B2 is a powerful and flexible way to protect and organize your users’ files. It can grow to any size you need and makes sure that you only pay for the storage you actually use. We hope you’ll join us—we look forward to protecting your content and helping you serve your users!

The post Not So Suite: Dealing With Google’s New 2TB Caps appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon Drive and Third Parties—Derailed
https://www.backblaze.com/blog/amazon-drive-and-third-parties-derailed/
Wed, 26 Aug 2020


If you ever used Amazon Drive for storing files or photos, it’s a good time to think about how to transition your content to a new platform—especially if you had been backing up your Synology NAS system!

Today, Amazon notified Amazon Drive and Amazon Photo customers that beginning November 1st, only Amazon’s proprietary web and mobile apps will be able to access your files.

This means, for example, that Synology users who had relied on Synology Cloud Sync or Hyper Backup to back up their systems to Amazon Drive will lose access via those tools.

Getting Back on Track

If you’re still using Amazon Drive to store general files, and Amazon Photos to store photos, you might be wondering how to protect that content with a tool that you prefer before the November 1st deadline hits.

1. Recover Your Content

Your first task will be to recover all of your content from Amazon Drive. Download the Amazon Photos app from their website, install it, and select “Download” to save all of your Amazon Drive content locally. For photos stored in Amazon, you may find it helpful to click “Home,” then “Photos Backed Up,” which takes you to a webpage that lets you download photos directly, say, by year.

First, recover all of your content from Amazon Photos and Amazon Drive.

2. Welcome to Your New Platform

With your content stored locally, we invite you to try Backblaze B2 Cloud Storage: unlimited storage for all of your files and photos at a better price than Amazon Drive.

Sign up for your Backblaze B2 account first. Your first 10GB of storage every month is free, and beyond that it’s only $5 per terabyte of storage per month, vs. Amazon’s $6.99 for a terabyte of storage.

3. Choose the Tool That Fits How You Work

Best of all, with Backblaze B2 you have a choice of over 60 solutions to connect to your new account!

If you’re a Synology user, you can keep using Synology Cloud Sync or Hyper Backup to back up your files—simply select your new Backblaze account instead of Amazon Drive.

If you prefer graphical tools that help present your cloud storage as files and folders, Cyberduck is a great choice, and Mountain Duck will even mount your Backblaze B2 account as drives on your Mac or Windows system.

You can browse our guides for all integration tools here.

Cyberduck connected to your Backblaze B2 account makes it as simple as browsing files and folders to upload and download your files.
Mountain Duck will even mount your Backblaze B2 cloud storage on your computer as a drive—here showing a thumbnail of an 8K video clip.

And if you prefer command-line tools, rclone is an excellent choice, as is Backblaze’s own command-line tool.

For more information about using rclone, join our webinar, “Tapping the Power of Cloud Copy & Sync with Rclone” on September 17th. Rclone’s creator, Nick Craig-Wood, will explain how to use its simple command line interface to:

  • Optimize your copy/sync in line with best practices
  • Mirror storage for security without adding complexity
  • Transfer data reliably despite limited bandwidth and/or intermittent connection

Whichever tool you choose, getting it set up is as simple as visiting your Backblaze B2 Account Page, generating an Application Key, then entering the Application Key ID and Application Key in your new tool’s configuration settings.

4. Protect, and Access Your Content Freely

With your tool of choice configured, it’s time to move your local content to your new Backblaze B2 Cloud Storage.

Welcome!

We hope you’ll join us—we look forward to protecting your files and photos!

The post Amazon Drive and Third Parties—Derailed appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Media Stats 2019: Top Takeaways From iconik’s New Report
https://www.backblaze.com/blog/media-stats-2019-top-takeaways-from-iconiks-new-report/
Thu, 14 May 2020


Recently, the team at iconik, a popular cloud-based content management and collaboration app, released a stats-driven look at how their business has grown over the past year. Given that we just released our Q1 Hard Drive Stats, we thought now was a good time to salute our partners at iconik for joining us in sharing business intelligence to help our industries grow and progress.

Their report is a fascinating look inside a disruptive business that is a major driver of growth for Backblaze B2 Cloud Storage. With that in mind, we wanted to share our top takeaways from their report and highlight key trends that will dramatically impact businesses soon—if they haven’t already.

➔ Download Our Media Workflows E-book

Takeaway 1: Workflow Applications in the Cloud Unlock Accelerated Growth

iconik doubled all assets in the final quarter of 2019 alone.

Traditional workflow apps thrive in the cloud when paired with active, object storage.

We’ve had many customers adopt iconik with Backblaze B2, including Everwell, Fin Films, and Complex Networks, among others. Each of these customers not only quickly converted to an agile, cloud-enabled workflow, they also immediately grew their use of cloud storage as the capacities it unlocked fueled new business. As such, it’s no surprise that iconik is growing fast, doubling all assets in Q4 2019 alone.

iconik is a prime example of an application that was traditionally installed on physical servers and storage in a facility. A longtime frustration with such systems is trying to “right-size” the amount of server horsepower and storage to allocate to them. Given how quickly content grows, making the wrong storage choice can be incredibly costly, or incredibly disruptive to your users, as the system “hits the wall” of capacity and the storage needs to be expanded yet again.

By moving the entire application to the cloud, users get the best of all worlds: a responsive and immersive application that keeps them focused on collaboration and production tasks, protection for the entire content library while keeping it immediately retrievable, and seamless growth to any size needed without any disruptions.

And these are only the benefits of moving your storage solution to the cloud. Almost every other application in your workflow that traditionally needs on-site servers and storage can be similarly shifted to the cloud, lending benefits like “pay-as-you-use-it” cost models, access from everywhere, and the ability to extend features with other cloud-delivered services like transcoding, machine learning, AI services, and more. (Our own B2 Cloud Storage service just launched S3 Compatible APIs, which opens up many more solutions for diverse workflows.)

Takeaway 2: Now, Every Company Is a Media Company

41% of iconik’s customer base comes from outside traditional media and entertainment.

Every company benefits by leveraging the power of collaboration and content management in their business.

Every company generates massive amounts of rich content, including graphics, video, product and sales literature, training videos, social media clips, and more. And every company fights “content sprawl” as documents are duplicated, stored on different departments’ servers, and different versions crop up. iconik makes it easy to keep that content organized and to ensure that your entire organization has perfect access to up-to-the-minute changes across all of it, and this use case now accounts for 41% of their customers.

Even if your company is not an ad agency, or involved in film and television production, thinking and moving like a content producer and organizing around efficient and collaborative storytelling can transform your business. By doing so, you will immediately improve how your company creates, organizes, and updates the content that carries your image and story to your end users and customers. The end result is faster, more responsive, and cleaner messaging to your end users.

Takeaway 3: Solve For Video First

Video is 17.67% of all assets in iconik—but 78.36% of storage used.

Make sure your workflow tools and storage are optimized for video first to head off future scaling challenges.

Despite being a small proportion of the content in iconik’s system, video takes up the most storage. While most customers have large libraries of HD or even SD content today, 4K video is rapidly gaining ground as it becomes the default resolution.

Video files have traditionally been the hardest element of a workflow to balance. Most shared storage systems can serve several editors working on HD streams, but only one or two 4K editors. So a system that proves that it can handle larger video files seamlessly will be able to scale as these resolution sizes continue to grow.

If you’re evaluating changes in your content production workflow, make sure that it can handle 4K video sizes and above, even if you’re predominantly managing HD content today.

Takeaway 4: Hybrid Cloud Needs to Be Transparent

47% of content stored locally, 53% in cloud storage.

Great solutions transparently bridge on-site and cloud storage, giving you the best features of each.

iconik’s report calls out the split of the storage location for assets it stores—whether on-site, or in the cloud. But the story behind the numbers reveals a deeper message.

Where assets are stored as part of a hybrid-cloud solution is a bit more complex. Assets in heavy use may exist locally only, while others might be stored on both local storage and the cloud, and the least often used assets might exist only in the cloud. And then, many customers choose to forego local storage completely and only work with content stored in the cloud.

While that may sound complex, the power of iconik’s implementation is that users don’t need to know, and shouldn’t need to know, about all that complexity. iconik keeps a single reference to the asset no matter how many copies there are, or where they are stored. Creative users simply use the solution as their interface as they move their content through production, internal approval, and handoff.

Meanwhile, admin users can easily make decisions about shifting content to the cloud, or move content back from cloud storage to local storage. This means that current projects are quickly retrieved from local storage, then when the project is finished the files can move to the cloud, freeing up space on local storage for other active projects.

For customers working with Backblaze B2, the cloud storage expands to whatever size needed on a simple, transparent pricing model. And it is fully active, or in other words, it’s immediately retrievable within the iconik interface. In this way it functions as a “live” archive as opposed to offline content archives like LTO tape libraries, or a cold storage cloud which could require days for file retrieval. As such, using ‘active’ cloud storage like Backblaze B2 eases the admin’s decision-making process about what to keep, and where to keep it. With transparent cloud storage, they have the insight needed to effectively scale their data.

Looking into Your (Business) Future

iconik’s report confirms a number of trends we’ve been seeing as every business comes to terms with the full potential and benefits of adopting cloud-based solutions:

  • The dominance of video content.
  • The need for transparent reporting and visibility of the location of data.
  • The fact that we’re all in the media business now.
  • And that cloud storage will unlock unanticipated growth.

Given all we can glean from this first report, we can’t wait for the next one.

But don’t take our word for it: dig into their numbers and let us and iconik know what you think. Tell us how these takeaways might help your business in the coming year, or where we might have missed something. We hope to see you in the comments.

The post Media Stats 2019: Top Takeaways From iconik’s New Report appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.
