Hana Database Size

  



Amazon Web Services (AWS) and SAP have worked together closely to certify the AWS platform so that companies of all sizes can fully realize all the benefits of the SAP HANA in-memory computing platform on AWS. With SAP HANA on AWS you can:

  • Achieve faster time to value - Provision infrastructure for SAP HANA in hours versus weeks or months.
  • Scale infrastructure resources - As your data requirements increase over time so does your AWS environment.
  • Reduce cost - Pay for only the infrastructure resources that you need and use.
  • Bring your own license - Leverage your existing licensing investment with no additional licensing fees.
  • Achieve a higher level of availability - Combine Amazon EC2 Auto Recovery, multiple Availability Zones, and SAP HANA System Replication (HSR).

For the first time, customers have the ability to leverage scale-out setups for their SAP S/4HANA workloads in the cloud and take advantage of the innovation of the AWS Nitro system, a combination of purpose-built hardware and software components that provide the performance, security, isolation, elasticity, and efficiency of the infrastructure that powers Amazon EC2 instances. You can now scale out up to four nodes, totaling 48 TB of memory, for extremely large S/4HANA deployments.

To learn more, see Announcing support for extremely large S/4HANA deployments on AWS.


Overview: On-demand infrastructure for SAP HANA using Bring Your Own Software and Bring Your Own License (BYOL) models for SAP HANA


Supported use cases: Production and non-production

Supported HANA scenarios:

- Native HANA applications
- Data marts / analytics / big data
- S/4HANA
- BW/4HANA
- Business Suite on HANA
- Business Warehouse and BPC on HANA
- Business One on HANA

Licensing: Bring Your Own License

Memory:


- OLTP scale-up: up to 24 TB
- SAP S/4HANA scale-out: up to 48 TB
- OLAP scale-up: up to 18 TB
- OLAP scale-out: up to 100 TB

Overview: A streamlined version of SAP HANA that is free to use for in-memory databases up to 32 GB. On-demand licenses for 64 GB - 128 GB memory sizes available for purchase in the AWS Marketplace.

Supported use cases: Production and non-production

Supported HANA scenarios:

- Native HANA applications
- Data marts / analytics / big data

Licensing: Free for up to 32 GB. Licenses for 64 GB - 128 GB available for purchase in the AWS Marketplace.

Memory: 32 GB | 64 GB | 128 GB | 256 GB


Overview: Free software trials of SAP HANA and SAP HANA-based solutions offered by SAP through the SAP Cloud Appliance Library.

Supported use cases: Trials

Licensing: Free trial license provided by SAP; customer pays for the AWS resources used during the trial period.

Overview
AWS provides SAP customers and partners with SAP-certified AWS Cloud infrastructure to run SAP HANA. With AWS, SAP HANA infrastructure can be rapidly provisioned without needing to make any capital investments or long-term commitments. SAP HANA on AWS can be deployed on either the SUSE Linux Enterprise Server (SLES) or the Red Hat Enterprise Linux (RHEL) operating system.

EC2 High Memory instances offer 6, 9, 12, 18, and 24 TB of memory in an instance. These instances are purpose-built to run large in-memory databases, including production deployments of the SAP HANA in-memory database, in the cloud. EC2 High Memory instances allow you to run large in-memory databases and business applications that rely on these databases in the same, shared Amazon Virtual Private Cloud (VPC), reducing the management overhead associated with complex networking and ensuring predictable performance.

For additional information about the Amazon EC2 instance types certified for SAP HANA, see the SAP HANA Certified IaaS Platforms Directory.

Deployment
The SAP HANA on AWS Quick Start provides an automated process to deploy fully functional SAP HANA systems on AWS, following best practices from AWS and SAP. The Quick Start ensures that AWS services and the operating system (SLES or RHEL) are optimally configured to achieve the best performance for your SAP HANA system.

Licensing
SAP HANA BYOL on AWS uses a Bring Your Own License model for the SAP HANA license. SAP customers can use their existing or new SAP HANA licenses to run SAP HANA on AWS. SLES and RHEL operating system licenses are provided by AWS, and their relevant license fees are combined with the base hourly fee of the respective Amazon EC2 instance type.

Supported use cases
SAP HANA BYOL on AWS is supported for both production and non-production use cases.

Supported HANA scenarios
The following HANA scenarios are supported by SAP for production on AWS.

- SAP BW/4HANA
- SAP Business Warehouse (BW) and Business Planning and Consolidation (BPC) on HANA
- Native SAP HANA applications
- Native SAP HANA data marts and analytics
- SAP S/4HANA
- SAP Business Suite (ERP, CRM, etc.) on HANA
- SAP HANA Live / Sidecar
- SAP Business One, version for SAP HANA

Memory sizes available

  • Scale-out - OLAP workloads like data marts, analytics, SAP BW/4HANA, SAP BW, and SAP BPC are supported on multi-node / scale-out configurations providing up to 100 TB of memory, when using the x1e.32xlarge instance type.
  • Scale-up - OLTP workloads like SAP Business Suite applications (e.g., ERP) and SAP S/4HANA are supported on single-node / scale-up configurations with up to 24 TB of memory.
  • S/4HANA - SAP S/4HANA on AWS is supported on single node / scale-up configurations with up to 24 TB of memory and multi-node / scale-out configurations with up to 48 TB of memory.

For additional information about the Amazon EC2 instance types certified for SAP HANA, see the SAP HANA Certified IaaS Platforms Directory.

AWS Region availability
To see the list of Amazon EC2 instance types certified for SAP and the AWS Regions they are available in, see Amazon EC2 Instance Types for SAP.

Pricing
The following tables provide estimates of sample SAP HANA configurations on AWS. To estimate the pricing for a multi-node / scale-out SAP HANA cluster, multiply the cost of a single SAP HANA node configuration by the number of nodes required. For additional information about estimating AWS infrastructure cost for SAP solutions, see the SAP on AWS Pricing Guide.
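As a quick sketch of that scale-out rule (the per-node figure is a hypothetical placeholder, not an actual AWS estimate):

```python
# Scale-out estimate per the rule above: single-node monthly cost
# multiplied by the node count. The per-node figure is a made-up
# placeholder, not a real AWS price.
single_node_monthly_usd = 8000.0  # hypothetical single HANA node estimate
node_count = 4                    # e.g., a 4-node scale-out cluster

cluster_monthly_usd = single_node_monthly_usd * node_count
print(cluster_monthly_usd)  # 32000.0
```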


SAP HANA Server only
1 Amazon EC2 instance for the SAP HANA DB platform
Example scenarios: Native HANA applications | Data marts

| EC2 instance type | vCPU | Memory (GiB) | Supported for production** | Monthly cost*** |
|---|---|---|---|---|
| r4.2xlarge | 8 | 61 | No | See estimate |
| x1e.xlarge | 4 | 122 | No | See estimate |
| r4.4xlarge | 16 | 122 | No | See estimate |
| x1e.2xlarge | 8 | 244 | No | See estimate |
| r3.8xlarge | 32 | 244 | Yes | See estimate |
| r4.8xlarge | 32 | 244 | Yes | See estimate |
| r5.8xlarge | 32 | 256 | Yes | See estimate |
| r5.12xlarge | 48 | 384 | Yes | See estimate |
| x1e.4xlarge | 16 | 488 | No | See estimate |
| r4.16xlarge | 64 | 488 | Yes | See estimate |
| r5.16xlarge | 64 | 512 | Yes | See estimate |
| r5.24xlarge | 96 | 768 | Yes | See estimate |
| r5.metal | 96 | 768 | Yes | See estimate |
| x1.16xlarge | 64 | 976 | Yes | See estimate |
| x1.32xlarge | 128 | 1,952 | Yes | See estimate |
| x1e.32xlarge | 128 | 3,904 | Yes | See estimate |
| u-6tb1.metal | 448* | 6,144 | Yes | See estimate |
| u-9tb1.metal | 448* | 9,216 | Yes | See estimate |
| u-12tb1.metal | 448* | 12,288 | Yes | See estimate |
| u-18tb1.metal | 448* | 18,432 | Yes | See estimate |
| u-24tb1.metal | 448* | 24,576 | Yes | See estimate |
*Each logical processor is a hyperthread on 224 CPU cores.


**For additional information about the different AWS SAP HANA configurations supported by SAP for production, see SAP note #1964437 (access to the SAP Support Portal is required to view SAP notes).

***Monthly cost will be based on your actual usage of AWS services, and will vary from the estimates provided above. For additional information, see the SAP on AWS Pricing Guide.

SAP HANA Server + NetWeaver Application Server
1 Amazon EC2 instance for the SAP HANA DB platform + 1 Amazon EC2 instance for the SAP NetWeaver Application Server
Example scenarios: Business Suite on HANA | S/4HANA | BW/4HANA | Business Warehouse on HANA

| HANA system: EC2 instance type | vCPU | Memory (GiB) | Supported for production** | NetWeaver AS system: EC2 instance type | vCPU | Memory (GiB) | Monthly cost*** |
|---|---|---|---|---|---|---|---|
| x1e.xlarge | 4 | 122 | No | r5.large | 2 | 16 | See estimate |
| r4.4xlarge | 16 | 122 | No | r5.large | 2 | 16 | See estimate |
| x1e.2xlarge | 8 | 244 | No | r5.xlarge | 4 | 32 | See estimate |
| r3.8xlarge | 32 | 244 | Yes | r5.xlarge | 4 | 32 | See estimate |
| r4.8xlarge | 32 | 244 | Yes | r5.xlarge | 4 | 32 | See estimate |
| r5.8xlarge | 32 | 256 | Yes | r5.xlarge | 4 | 32 | See estimate |
| r5.12xlarge | 48 | 384 | Yes | r5.2xlarge | 8 | 64 | See estimate |
| x1e.4xlarge | 16 | 488 | No | r5.2xlarge | 8 | 64 | See estimate |
| r4.16xlarge | 64 | 488 | Yes | r5.2xlarge | 8 | 64 | See estimate |
| r5.24xlarge | 96 | 768 | Yes | r5.2xlarge | 8 | 64 | See estimate |
| r5.metal | 96 | 768 | Yes | r5.2xlarge | 8 | 64 | See estimate |
| x1.16xlarge | 64 | 976 | Yes | r5.2xlarge | 8 | 64 | See estimate |
| x1.32xlarge | 128 | 1,952 | Yes | r5.2xlarge | 8 | 64 | See estimate |
| x1e.32xlarge | 128 | 3,904 | Yes | r5.2xlarge | 8 | 64 | See estimate |
| u-6tb1.metal | 448* | 6,144 | Yes | r5.4xlarge | 16 | 128 | See estimate |
| u-9tb1.metal | 448* | 9,216 | Yes | r5.4xlarge | 16 | 128 | See estimate |
| u-12tb1.metal | 448* | 12,288 | Yes | r5.4xlarge | 16 | 128 | See estimate |
| u-18tb1.metal | 448* | 18,432 | Yes | r5.4xlarge | 16 | 128 | See estimate |
| u-24tb1.metal | 448* | 24,576 | Yes | r5.4xlarge | 16 | 128 | See estimate |
* Each logical processor is a hyperthread on 224 CPU cores.

**For additional information about the different AWS SAP HANA configurations supported by SAP for production, see SAP note #1964437 (access to the SAP Support Portal is required to view SAP notes).

***Monthly cost will be based on your actual usage of AWS services, and will vary from the estimates provided above. For additional information, see the SAP on AWS Pricing Guide.

Purchasing options
Amazon Elastic Compute Cloud (Amazon EC2) offers multiple purchasing options for EC2 instances. The two most relevant options for an SAP HANA system are On-Demand Instances and Reserved Instances. (See an overview of the different Amazon EC2 purchasing options.) The pricing estimates provided in the previous table are based on the 1 Year Term - No Upfront Payment - Reserved Instance purchasing option. If you want to compare the cost to an On-Demand Instance, you can change the Billing Option for the instance on the Services tab.

Pay only for what you use
With the On-Demand Instance purchasing option, you pay only for the hours when the instance is running. For a non-production system, you can reduce your cost significantly by running the system only during the hours it is required online. The pricing estimates provided in the previous table are based on 24x7 utilization. If you do not need to keep the system online 24x7, you can adjust the billing option (as described previously) and the utilization level in the AWS Pricing Calculator to see the effect on the monthly price.

SAP Business One, version for SAP HANA
SAP Business One, version for SAP HANA, has been certified by SAP for production deployment on AWS. Small businesses can access all the benefits of using AWS for SAP Business One, including lower costs, speed and agility, elasticity, and flexible capacity. For certified configurations, hosting options, and sample pricing, see the reference sheet. To deploy SAP Business One, version for SAP HANA, on AWS, use the automated Quick Start reference deployment.

How to get started
The SAP HANA on AWS Quick Start helps you deploy fully functional SAP HANA systems on AWS, following best practices from AWS and SAP. The Quick Start ensures that AWS services and the operating system (SLES or RHEL) are optimally configured to achieve the best performance for your SAP HANA system.

  • SAP HANA on AWS Overview
  • SAP HANA on AWS Quick Start
  • SAP HANA on AWS Whitepapers

Azure NetApp Files provides native NFS shares that can be used for the /hana/shared, /hana/data, and /hana/log volumes. Using ANF-based NFS shares for the /hana/data and /hana/log volumes requires the NFS v4.1 protocol. NFS protocol v3 is not supported for the /hana/data and /hana/log volumes when the shares are based on ANF.

Important


The NFS v3 protocol implemented on Azure NetApp Files is not supported for /hana/data and /hana/log. The use of NFS v4.1 is mandatory for the /hana/data and /hana/log volumes from a functional point of view. For the /hana/shared volume, either the NFS v3 or the NFS v4.1 protocol can be used.

Important considerations

When considering Azure NetApp Files for SAP NetWeaver and SAP HANA, be aware of the following important considerations:


  • The minimum capacity pool is 4 TiB
  • The minimum volume size is 100 GiB
  • Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes are mounted, must be in the same Azure Virtual Network or in peered virtual networks in the same region
  • It is important to have the virtual machines deployed in close proximity to the Azure NetApp storage for low latency.
  • The selected virtual network must have a subnet, delegated to Azure NetApp Files
  • Make sure the latency from the database server to the ANF volume is measured and below 1 millisecond
  • The throughput of an Azure NetApp volume is a function of the volume quota and Service level, as documented in Service level for Azure NetApp Files. When sizing the HANA Azure NetApp volumes, make sure the resulting throughput meets the HANA system requirements
  • Try to consolidate volumes to get more performance from a larger volume; for example, use one volume for /sapmnt, /usr/sap/trans, … if possible
  • Azure NetApp Files offers export policies: you can control the allowed clients and the access type (Read & Write, Read Only, and so on)
  • Azure NetApp Files isn't zone aware yet and currently isn't deployed in all Availability Zones in an Azure region. Be aware of the potential latency implications in some Azure regions
  • The User ID for sidadm and the Group ID for sapsys on the virtual machines must match the configuration in Azure NetApp Files.

Important

For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity.

Important

If there is a mismatch in the User ID for sidadm or the Group ID for sapsys between the virtual machine and the Azure NetApp configuration, the permissions for files on Azure NetApp volumes mounted to the VM would be displayed as nobody. Make sure to specify the correct User ID for sidadm and the Group ID for sapsys when on-boarding a new system to Azure NetApp Files.

Sizing for HANA database on Azure NetApp Files

The throughput of an Azure NetApp volume is a function of the volume size and Service level, as documented in Service level for Azure NetApp Files.

It is important to understand the relationship between performance and size, and that there are physical limits for a LIF (Logical Interface) of the SVM (Storage Virtual Machine).

The table below demonstrates that it could make sense to create a large “Standard” volume to store backups, and that it does not make sense to create an “Ultra” volume larger than 12 TB, because the physical bandwidth capacity of a single LIF would be exceeded.

The maximum throughput for a LIF and a single Linux session is between 1.2 and 1.4 GB/s.

| Size | Throughput Standard | Throughput Premium | Throughput Ultra |
|---|---|---|---|
| 1 TB | 16 MB/sec | 64 MB/sec | 128 MB/sec |
| 2 TB | 32 MB/sec | 128 MB/sec | 256 MB/sec |
| 4 TB | 64 MB/sec | 256 MB/sec | 512 MB/sec |
| 10 TB | 160 MB/sec | 640 MB/sec | 1,280 MB/sec |
| 15 TB | 240 MB/sec | 960 MB/sec | 1,400 MB/sec |
| 20 TB | 320 MB/sec | 1,280 MB/sec | 1,400 MB/sec |
| 40 TB | 640 MB/sec | 1,400 MB/sec | 1,400 MB/sec |
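The relationship in the table can be captured with a simple model, sketched here under the assumption of per-TB throughput factors of 16/64/128 MB/s for the Standard/Premium/Ultra service levels and the roughly 1,400 MB/s single-LIF ceiling mentioned above:

```python
# Approximate ANF throughput model implied by the table above.
# Assumptions: linear scaling at 16/64/128 MB/s per TB for the
# Standard/Premium/Ultra service levels, capped by the ~1,400 MB/s
# physical limit of a single LIF / Linux NFS session.
FACTORS_MB_S_PER_TB = {"Standard": 16, "Premium": 64, "Ultra": 128}
LIF_CAP_MB_S = 1400

def volume_throughput(size_tb, service_level):
    """Return the expected volume throughput in MB/s."""
    return min(size_tb * FACTORS_MB_S_PER_TB[service_level], LIF_CAP_MB_S)

print(volume_throughput(10, "Ultra"))     # 1280
print(volume_throughput(15, "Ultra"))     # capped at 1400
print(volume_throughput(40, "Standard"))  # 640
```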

It is important to understand that the data is written to the same SSDs in the storage backend; the performance quota of the capacity pool exists to make the environment manageable. The storage KPIs are equal for all HANA database sizes. In almost all cases, this assumption does not reflect reality or customer expectations. The size of a HANA system does not necessarily mean that a small system requires low storage throughput and a large system requires high storage throughput, but generally we can expect higher throughput requirements for larger HANA database instances. As a result of SAP's sizing rules for the underlying hardware, such larger HANA instances also provide more CPU resources and higher parallelism in tasks like loading data after an instance restart. Consequently, the volume sizes should be adapted to the customer's expectations and requirements, and not driven by pure capacity requirements alone.

As you design the infrastructure for SAP in Azure, you should be aware of some minimum storage throughput requirements (for production systems) by SAP, which translate into minimum throughput characteristics of:

| Volume type and I/O type | Minimum KPI demanded by SAP | Premium service level | Ultra service level |
|---|---|---|---|
| Log Volume Write | 250 MB/sec | 4 TB | 2 TB |
| Data Volume Write | 250 MB/sec | 4 TB | 2 TB |
| Data Volume Read | 400 MB/sec | 6.3 TB | 3.2 TB |

Since all three KPIs are demanded, the /hana/data volume needs to be sized toward the larger capacity to fulfill the minimum read requirements.
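Under the assumption that throughput scales linearly with size at the per-TB service-level factors shown earlier (64 MB/s per TB for Premium, 128 MB/s per TB for Ultra), the minimum sizes in the table can be derived as:

```python
import math

# Per-TB throughput factors (MB/s) for ANF service levels; an assumption
# derived from the service-level throughput table in this article.
FACTORS_MB_S_PER_TB = {"Premium": 64, "Ultra": 128}

def min_volume_size(kpis_mb_s, service_level):
    """Smallest volume (TB, rounded up to 0.1) meeting all throughput KPIs."""
    needed_tb = max(kpis_mb_s) / FACTORS_MB_S_PER_TB[service_level]
    return math.ceil(needed_tb * 10) / 10

# /hana/data must satisfy both the 250 MB/s write and 400 MB/s read KPIs:
print(min_volume_size([250, 400], "Premium"))  # 6.3
print(min_volume_size([250, 400], "Ultra"))    # 3.2
# /hana/log only has the 250 MB/s write KPI:
print(min_volume_size([250], "Premium"))       # 4.0
```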

For HANA systems that do not require high bandwidth, the ANF volume sizes can be smaller, and if a HANA system requires more throughput, the volume can be adapted by resizing the capacity online. No KPIs are defined for backup volumes; however, the backup volume throughput is essential for a well-performing environment. Log and data volume performance must be designed to the customer's expectations.


Important

Independent of the capacity you deploy on a single NFS volume, the throughput is expected to plateau in the range of 1.2-1.4 GB/sec of bandwidth leveraged by a consumer in a virtual machine. This has to do with the underlying architecture of the ANF offering and related Linux session limits around NFS. The performance and throughput numbers documented in the article Performance benchmark test results for Azure NetApp Files were measured against one shared NFS volume with multiple client VMs, and as a result with multiple sessions. That scenario is different from the one we measure in SAP, where we measure throughput from a single VM against an NFS volume hosted on ANF.

To meet the SAP minimum throughput requirements for data and log, and according to the guidelines for /hana/shared, the recommended sizes would look like:

| Volume | Size, Premium Storage tier | Size, Ultra Storage tier | Supported NFS protocol |
|---|---|---|---|
| /hana/log | 4 TiB | 2 TiB | v4.1 |
| /hana/data | 6.3 TiB | 3.2 TiB | v4.1 |
| /hana/shared scale-up | Min(1 TB, 1 x RAM) | Min(1 TB, 1 x RAM) | v3 or v4.1 |
| /hana/shared scale-out | 1 x RAM of worker node per 4 worker nodes | 1 x RAM of worker node per 4 worker nodes | v3 or v4.1 |
| /hana/logbackup | 3 x RAM | 3 x RAM | v3 or v4.1 |
| /hana/backup | 2 x RAM | 2 x RAM | v3 or v4.1 |
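A minimal sketch of these sizing rules as a function of the instance RAM (scale-up case; the data and log sizes are taken from the recommendations above, and all sizes are in TiB):

```python
def hana_anf_volume_sizes(ram_tib, service_level="Premium"):
    """Sketch of the scale-up volume-sizing rules above (sizes in TiB).
    The fixed data/log sizes assume the Premium or Ultra service level."""
    data = {"Premium": 6.3, "Ultra": 3.2}[service_level]
    log = {"Premium": 4.0, "Ultra": 2.0}[service_level]
    return {
        "/hana/data": data,
        "/hana/log": log,
        "/hana/shared": min(1.0, ram_tib),  # Min(1 TB, 1 x RAM)
        "/hana/logbackup": 3 * ram_tib,     # 3 x RAM
        "/hana/backup": 2 * ram_tib,        # 2 x RAM
    }

# Example: a scale-up system with 2 TiB of RAM on the Premium tier.
print(hana_anf_volume_sizes(2.0))
```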


For all volumes, NFS v4.1 is highly recommended.

The sizes for the backup volumes are estimations. Exact requirements need to be defined based on workload and operation processes. For backups, you could consolidate many volumes for different SAP HANA instances to one (or two) larger volumes, which could have a lower service level of ANF.

Note

The Azure NetApp Files, sizing recommendations stated in this document are targeting the minimum requirements SAP expresses towards their infrastructure providers. In real customer deployments and workload scenarios, that may not be enough. Use these recommendations as a starting point and adapt, based on the requirements of your specific workload.

Therefore, you could consider deploying similar throughput for the ANF volumes as listed for Ultra disk storage. Also consider the volume sizes listed for the different VM SKUs in the Ultra disk tables.

Tip

You can re-size Azure NetApp Files volumes dynamically, without the need to unmount the volumes, stop the virtual machines, or stop SAP HANA. That allows flexibility to meet both your application's expected and unforeseen throughput demands.

Documentation on how to deploy an SAP HANA scale-out configuration with standby node using NFS v4.1 volumes that are hosted in ANF is published in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server.

Availability

ANF system updates and upgrades are applied without impacting the customer environment. The defined SLA is 99.99%.

Volumes and IP addresses and capacity pools

With ANF, it is important to understand how the underlying infrastructure is built. A capacity pool is only a structure that makes it simpler to create a billing model for ANF; it has no physical relationship to the underlying infrastructure. If you create a capacity pool, only a shell that can be charged is created, nothing more. When you create a volume, the first SVM (Storage Virtual Machine) is created on a cluster of several NetApp systems, and a single IP is created for this SVM to access the volume. If you create several volumes, all the volumes are distributed in this SVM over this multi-controller NetApp cluster. Even if you get only one IP, the data is distributed over several controllers. ANF has logic that automatically redistributes customer workloads once the volumes and/or capacity of the configured storage reach an internal pre-defined level. You might notice such cases because a new IP address gets assigned to access the volumes.

Log volume and log backup volume

The log volume (/hana/log) is used to write the online redo log. Thus, there are open files located in this volume, and it makes no sense to snapshot this volume. Online redo log files are archived or backed up to the log backup volume once the online redo log file is full or a redo log backup is executed. To provide reasonable backup performance, the log backup volume requires good throughput. To optimize storage costs, it can make sense to consolidate the log backup volumes of multiple HANA instances, so that multiple HANA instances use the same volume and write their backups into different directories. With such a consolidation, you get more throughput, because you need to make the volume a bit larger.

The same applies to the volume you use to write full HANA database backups to.

Backup

Besides streaming backups and the Azure Backup service backing up SAP HANA databases, as described in the article Backup guide for SAP HANA on Azure Virtual Machines, Azure NetApp Files opens the possibility of performing storage-based snapshot backups.

SAP HANA supports:

  • Storage-based snapshot backups from SAP HANA 1.0 SPS7 on
  • Storage-based snapshot backup support for Multi Database Container (MDC) HANA environments from SAP HANA 2.0 SPS4 on

Creating storage-based snapshot backups is a simple four-step procedure:

  1. Creating a HANA (internal) database snapshot - an activity you or tools need to perform
  2. SAP HANA writes data to the datafiles to create a consistent state on the storage - HANA performs this step as a result of creating a HANA snapshot
  3. Create a snapshot on the /hana/data volume on the storage - a step you or tools need to perform. There is no need to perform a snapshot on the /hana/log volume
  4. Delete the HANA (internal) database snapshot and resume normal operation - a step you or tools need to perform

Warning

Missing or failing to perform the last step has a severe impact on SAP HANA's memory demand and can lead to a halt of SAP HANA.
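The four-step procedure can be sketched as commands, assembled here in Python without being executed. The hdbsql snapshot SQL follows SAP HANA 2.0 syntax; the `az netappfiles snapshot create` call and all names (resource group, account, pool, volume, and the BACKUP_KEY user-store key) are illustrative assumptions, not values from this article:

```python
# Sketch of the snapshot procedure: build (but do not run) the commands.
# All resource names and the hdbuserstore key are hypothetical.
def snapshot_commands(backup_id):
    # Step 1: create the HANA-internal database snapshot.
    prepare = ["hdbsql", "-U", "BACKUP_KEY",
               "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT "
               "COMMENT 'anf-snapshot'"]
    # Step 3: storage snapshot on the /hana/data volume only
    # (no snapshot of /hana/log is needed).
    storage = ["az", "netappfiles", "snapshot", "create",
               "--resource-group", "rg-hana",
               "--account-name", "anf-account",
               "--pool-name", "pool1",
               "--volume-name", "hana-data",
               "--name", f"snap-{backup_id}"]
    # Step 4: close the HANA snapshot so normal operation resumes.
    confirm = ["hdbsql", "-U", "BACKUP_KEY",
               "BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID "
               f"{backup_id} SUCCESSFUL 'snap-{backup_id}'"]
    return [prepare, storage, confirm]

for cmd in snapshot_commands("1234"):
    print(" ".join(cmd))
```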

This snapshot backup procedure can be managed in a variety of ways, using various tools. One example is the Python script ntaphana_azure.py, available on GitHub at https://github.com/netapp/ntaphana. This is sample code, provided “as-is” without any maintenance or support.

Caution

A snapshot in itself is not a protected backup since it is located on the same physical storage as the volume you just took a snapshot of. It is mandatory to “protect” at least one snapshot per day to a different location. This can be done in the same environment, in a remote Azure region or on Azure Blob storage.

For users of Commvault backup products, a second option is Commvault IntelliSnap V.11.21 and later, which offers Azure NetApp Files support. The article Commvault IntelliSnap 11.21 provides more information.

Back up the snapshot using Azure blob storage

Backing up to Azure Blob storage is a cost-effective and fast method to save ANF-based HANA database storage snapshot backups. To save the snapshots to Azure Blob storage, the azcopy tool is preferred. Download the latest version of this tool and install it, for example, in the bin directory where the Python script from GitHub is installed.

The most advanced feature is the SYNC option. If you use the SYNC option, azcopy keeps the source and the destination directory synchronized. The parameter --delete-destination is important: without this parameter, azcopy does not delete files at the destination site, and the space utilization on the destination side would grow. Create a Block Blob container in your Azure storage account, then create the SAS key for the blob container and synchronize the snapshot folder to the Azure Blob container.

For example, if a daily snapshot should be synchronized to the Azure Blob container to protect the data, and only that one snapshot should be kept, an azcopy sync command with --delete-destination can be used.
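A sketch of such a command, assembled here in Python; the storage account, container name, SAS token, and snapshot path are placeholders, not real values:

```python
# Build the azcopy sync invocation described above; all names are
# hypothetical placeholders. --delete-destination keeps only the
# snapshots that still exist in the source directory.
snapshot_dir = "/hana/data/HDB/mnt00001/.snapshot/daily"
container_url = ("https://mystorageacct.blob.core.windows.net/"
                 "hana-snapshots?<SAS-token>")

azcopy_cmd = ["azcopy", "sync", snapshot_dir, container_url,
              "--delete-destination", "true"]
print(" ".join(azcopy_cmd))
```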

Next steps


Read the article: