Information | 2011. 10. 3. 01:43

Why not Exadata

  1. Oracle Database only. If you want to run other applications such as web servers, app servers, or other databases like SQL Server or DB2, you can’t.
  2. 11g or greater only. If you have legacy versions of Oracle, you must upgrade before you can run them on Exadata.
  3. RAC only. You have to run RAC; even for a single-instance DB, you run RAC with only one active instance. Technically I guess you could have a single node running standalone and not clustered, but I’ve never seen Oracle approve such a configuration.
  4. Data protection requires ASM to perform mirroring. This means ASM performs at least two writes (or three with double mirroring) for every logical write, so writes are slower. Oracle’s performance numbers are based on reads.
  5. Oracle Flash Cache is read-only, so all writes go to disk first; the cache is updated only if the data is already cached.
  6. Not five-nines availability. Data is mirrored across storage cells using ASM. If a storage cell crashes and you are only using standard mirroring, you have 12 drives offline until the cell is repaired or restarted, which is risky. Furthermore, when applying a patch each cell is brought offline in turn, making you vulnerable to a double outage; patching can easily take up to 30 minutes per cell assuming no errors occur, and longer if an error does occur. Finally, if a cell is lost, all 12 drives need to be re-protected, which could take many hours or even days to rebuild depending on the drive type and the amount of data stored. Your options are to create a second mirror (double mirroring), which is very costly, or to leverage Data Guard, which would require you to fail over to your DR site.
  7. No RAID 5, 6, or 10 options. Since mirroring is your only option, it can be very expensive. Oracle’s response is usually to emphasize that you can use columnar compression for increased compression ratios, but columnar data stores are only applicable to warehousing and analytic workloads and don’t lend themselves to transactional workloads.
  8. Data Guard is the only option for replication.
  9. RMAN is the only option for backups.
  10. No hardware options for snapshots or clones.
  11. SQL offloading, the magic sauce of Exadata, is only applicable to data warehousing and analytic style queries. Most OLTP/transactional workloads don’t benefit from it, so you won’t see the exponential gains in performance.
  12. A single full rack is limited to 168 drives (12 drives per storage cell, 14 cells in a full rack).
  13. The ratio of drives to cores in a storage cell is 1:1. Oracle touts this as a feature, but in reality a single core can drive a lot more than a single drive, so the extra cores are a waste of money. They need the ratio for SQL offloading, but as already mentioned, only a limited set of workloads actually benefits from SQL offloading.
  14. It’s an Oracle-only solution, locking you into just their supported products, software, and vision.

 

Why Vblock

  1. Like Exadata, it’s a pre-built, pre-configured, fully integrated, and certified offering, with a single 1-800 number to call for service.
  2. Unlike Exadata, you aren’t limited in what applications you run: Oracle Database, Oracle WebLogic, and other Oracle apps, as well as non-Oracle software such as WebSphere, Tomcat, Apache, MS Exchange, MS SharePoint, Windows, Linux, and more.
  3. Vblock is a full five-nines solution, offering the industry’s leading storage, networking, and compute tiers to ensure the highest levels of availability, data protection, and security.
  4. Run any version of Oracle Database, not just 11g.
  5. Run RAC or non-RAC single-instance databases.
  6. Mix and match virtualized nodes and physical nodes, giving you the best of both worlds. If you aren’t comfortable virtualizing Oracle in production but are willing to virtualize everything else, why have two different environments? On Vblock you can do both from a single platform.
  7. With an EMC VNX or VMAX storage array powering the Vblock, you aren’t limited to just mirroring. You can choose RAID 0, 1, 5, 6, or 10 configurations. Choose what’s right for you, and even mix and match across different drive types.
  8. Leverage EMC FAST VP and FAST Cache to enable automatic storage tiering of your data across Enterprise Flash Drives, SAS/FC and NL-SAS/SATA. The intelligence behind FAST will dynamically move data based on real usage patterns, not based on date ranges or convoluted business logic. It’s the best way in the industry to optimize performance while minimizing cost, and it’s all automatic.
  9. Provides writeable flash storage in both persistent and cache modes, achieving improved write performance and efficiencies.
  10. Leverage best-of-breed replication solutions such as SRDF, RecoverPoint, or Data Guard. With SRDF and RecoverPoint you aren’t limited to just the database files, so you can use a single replication strategy for all your DR needs.
  11. Choose from best-of-breed backup options, including Avamar, Data Domain, or RMAN. Pick what’s right for your business and don’t be locked into a single choice.
  12. Leverage hardware snapshots and clones to offload storage tasks to the storage array, where they run best. Snapshots also store only the delta changes, further saving disk space.
  13. Vblock has performance features similar to Exadata’s, including Enterprise Flash Drives, Westmere processors, and 10 Gigabit Ethernet (load balanced across four links per node for 40 Gb of aggregate throughput). It runs Oracle 11g and supports 11g advanced features such as Advanced Compression. Coupled with EMC’s FAST and PowerPath to optimize storage tiering and path utilization, you have an extremely fast and balanced configuration.
  14. When SQL offloading is required for special data warehousing and analytics, look at Greenplum as an alternative. Greenplum’s Chorus is a virtual node that can run within the Vblock, or you can use Hadoop to automate the aggregation and streaming of data out of your OLTP database on Vblock into a Greenplum appliance.
  15. Scale a single Vblock 700 to nearly 2000 drives.
  16. Vblock is a best-of-breed solution developed by a coalition that includes VMware, Cisco, EMC, and Intel. All four companies are leaders in their respective spaces and deliver the solution as a single integrated and supported product, not just a reference architecture.

    http://www.drvcloud.com/myblog/?p=126 
Posted by [TheWon]
Storage | 2011. 9. 30. 17:05

Unix

Bonnie++ - NEW!

A greatly improved disk I/O benchmark based on the code for Bonnie by Tim Bray. Bonnie++, rewritten in C++ by Russell Coker (russell@coker.com.au), adds the ability to test more than 2 GB of storage on 32-bit machines and adds tests for file creat(), stat(), and unlink() operations.
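
For reference, a run along the following lines is typical; the scratch directory, file count, label, and user below are assumed values, and the file size (-s) should generally be at least twice the machine's RAM so the page cache doesn't hide real disk speed:

    # assumed names: scratch filesystem mounted at /mnt/scratch, host label "testhost"
    # -s sets the data file size, -n the small-file count (in units of 1024),
    # -u the unprivileged user to run as when started as root
    bonnie++ -d /mnt/scratch -s 8g -n 128 -u nobody -m testhost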

Bonnie v.2.0.6

One of the best "open systems" benchmarks available. It can be compiled under different UNIX flavors. We have successfully run it under SCO UNIX, Linux, Solaris, and BSDI. It should also compile with minimal changes under other UNIXes.

IOzone - UPDATED

Another great benchmark, in our opinion. The benchmark generates and measures a variety of file operations. IOzone is useful for performing broad filesystem analysis of a vendor's computer platform. It can also be used for NFS performance testing, and the latest version supports cluster testing as well. IOzone has been ported to numerous UNIX OSes and Windows NT. Read more about it and download the source by clicking on the link above.
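
As a sketch (the mount point and sizes here are assumptions, not recommendations), a run restricted to the sequential write/rewrite and read/reread tests on a filesystem mounted at /mnt/test might look like:

    # -a runs automatic mode, -i selects tests (0 = write/rewrite, 1 = read/reread),
    # -g caps the maximum file size, -f places the working file on the filesystem under test
    iozone -a -i 0 -i 1 -g 4G -f /mnt/test/iozone.tmp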

Xbench - NEW!

Xbench is a comprehensive benchmarking solution for Mac OS X. In addition to various system component benchmarks, it provides performance testing for sequential and random uncached disk I/O.

IOstone v. C/II

IOstone is a multi-platform disk, file I/O, and buffer-cache efficiency benchmark. It was originally created for UNIX and later ported to OS/2 and DOS. The archive includes the OS/2 and DOS executables along with source code.

Disktest

Disktest allows any direct-access device (and some sequential devices) to be tested while UNIX is still available to other users. Take care to make the device under test unavailable to other system users.

IOBENCH

An excellent I/O throughput and fixed-workload benchmark for UNIX. Two versions of IOBENCH are contained in sub-directories of the distribution; they share the same source files and differ only in the scripts that drive the benchmark. Directory 073.iobench contains the throughput variant, and directory 084.iobenchpf contains the fixed-workload variant. This is the SPEC 2.6 distribution of IOBENCH.

IOCALL

IOCALL measures OS performance; in fact, that is nearly all it measures. It concentrates on system-call interface efficiency, especially read() performance, the most used system call on most systems. IOCALL is intended for UNIX systems only.

RawIO - NEW

A low-level raw-device benchmarking program written by Greg Lehey, author of the book "The Complete FreeBSD", published by Walnut Creek. This benchmark only compiles under BSD, as it uses a BSD-specific mmap() call; a port to Linux is in progress.

PostMark - NEW

A benchmark to measure the performance of e-mail, netnews, and e-commerce classes of applications. PostMark was created to simulate heavy small-file workloads with a minimal amount of software and configuration effort and to provide complete reproducibility. It can be compiled under Solaris, Digital Unix, and Win32 environments. Learn more about PostMark and download the source code by following the link above.
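
PostMark is driven by a small set of interactive commands (or a configuration file named on the command line). A minimal sketch of a session, with the location and counts chosen arbitrarily, looks roughly like this: set location picks the directory for the file pool, set number sets the initial file count, and set transactions sets how many create/delete/read/append operations to run.

    $ postmark
    pm> set location /mnt/test
    pm> set number 10000
    pm> set transactions 20000
    pm> run
    pm> quit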

 
Windows

 

Windows 2000/NT/XP

Nbench

A very nice little benchmarking program for Windows NT. Nbench reports the following components of performance:

  • CPU speed: integer and floating operations/sec
  • L1 and L2 cache speeds: MB/sec
  • main memory speed: MB/sec
  • disk read and write speeds: MB/sec

SMP systems and multi-tasking OS efficiency can be tested using up to 20 separate threads of execution.

NTiogen 1.03 - UPDATED

The NTiogen benchmark was written by Symbios Logic. It is a Windows NT port of their popular UNIX benchmark IOGEN. NTIOGEN is the parent process that spawns the specified number of IOGEN processes, which actually perform the I/O.

The program will display as output the number of processes, the average response time, the number of I/O operations per second, and the number of KBytes per second.

IOmeter - UPDATED

This benchmark was originally written by Intel. Intel has since discontinued development and released the source code into the public domain; SourceForge.net currently hosts the project. Iometer is a disk I/O subsystem measurement and characterization tool for single and clustered systems. Iometer does for a computer's I/O subsystem what a dynamometer does for an engine: it measures performance under a controlled load. It is now available for Windows 2000, Linux, and Solaris.
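
On Linux and Solaris only the workload generator, Dynamo, runs natively; it connects back to an Iometer GUI running elsewhere. A hedged sketch, with both IP addresses as placeholders:

    # placeholder addresses: Iometer GUI at 192.168.1.10, this Dynamo host at 192.168.1.20
    ./dynamo -i 192.168.1.10 -m 192.168.1.20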

Bench32

A very comprehensive benchmark that measures overall system performance under Windows NT or Windows 95. Unfortunately, the company that wrote this benchmarking program seems to have gone out of business. You can find a local copy of Bench32 v.1.21 here.

ThreadMark

A very popular benchmark written by Adaptec. Adaptec decided it could no longer support the tool and removed it from their web site. Click on the link to download ThreadMark from our server.

HDTach v.2.61

Shareware/commercial disk I/O benchmark. HD Tach is a physical performance hard drive test for Windows 95/98 and Windows NT. In Windows 95/98 it uses a special kernel-mode VxD to get maximum accuracy by bypassing the file system; a similar mechanism is used in Windows NT. HD Tach reads from areas all over the hard drive and reports its average speed. It also logs the read speeds to a text file that you can load into a spreadsheet and graph to visualize the results of the test.

Posted by [TheWon]
HW/OS | 2011. 9. 30. 10:57
Description:            Power/Cooling subsystem Unrecovered Error, bypassed
                        with loss of redundancy. Refer to the system service
                        documentation for more information.
Additional Words:       2-003C0002 3-00000000 4-00000000 5-00000000
                        6-00000000 7-00000000 8-00000000 9-00000000
Possible FRUs:
    Priority: L FRU: 44V2965  S/N: YL182720806K CCIN: 2A05 
    Location: U789D.001.DQDYHVV-P2-C4
    Priority: L FRU: 10N9353  S/N: YL10D7214193 CCIN: 294E 
Posted by [TheWon]
Storage | 2011. 9. 23. 14:33

  • sysconfig -a : shows hardware configuration with more verbose information
  • sysconfig -d : shows information of the disk attached to the filer
  • version : shows the netapp Ontap OS version.
  • uptime : shows the filer uptime
  • dns info : Shows the DNS resolvers, the number of hits and misses, and other info
  • nis info : Shows the NIS domain name, YP servers, etc.
  • rdfile : Like "cat" in Linux, used to read the contents of text files
  • wrfile : Creates/Overwrites a file. Similar to "cat > filename" in Linux
  • aggr status : Shows the aggregate status
  • aggr status -r : Shows the raid configuration, reconstruction information of the disks in filer
  • aggr show_space : Shows the disk usage of the aggregate, WAFL reserve, overheads, etc.
  • vol status : Shows the volume information
  • vol status -s : Displays the spare disks on the filer
  • vol status -f : Displays the failed disks on the filer
  • vol status -r : Shows the raid configuration, reconstruction information of the disks
  • df -h : Displays volume disk usage
  • df -i : Shows the inode counts of all the volumes
  • df -Ah : Shows "df" information of the aggregate
  • license : Displays/add/removes license on a netapp filer
  • maxfiles : Displays and adds more inodes to a volume
  • aggr create : Creates an aggregate (a combined example workflow follows this list)
  • vol create <volname> <aggrname> <size> : Creates volume in an aggregate
  • vol offline <volname> : Offlines a volume
  • vol online <volname> : Onlines a volume
  • vol destroy <volname> : Destroys and removes a volume
  • vol size <volname> [+|-]<size> : Resize a volume in netapp filer
  • vol options : Displays/Changes volume options in a netapp filer
  • qtree create <qtree-path> : Creates qtree
  • qtree status : Displays the status of qtrees
  • quota on : Enables quota on a netapp filer
  • quota off : Disables quota
  • quota resize : Resizes quota
  • quota report : Reports the quota and usage
  • snap list : Displays all snapshots on a volume
  • snap create <volname> <snapname> : Create snapshot
  • snap sched <volname> <schedule> : Schedule snapshot creation
  • snap reserve <volname> <percentage> : Display/set snapshot reserve space in volume
  • /etc/exports : File that manages the NFS exports
  • rdfile /etc/exports : Read the NFS exports file
  • wrfile /etc/exports : Write to NFS exports file
  • exportfs -a : Exports all the filesystems listed in /etc/exports
  • cifs setup : Setup cifs
  • cifs shares : Creates/displays cifs shares
  • cifs access : Changes access of cifs shares
  • lun create : Creates iscsi or fcp luns on a netapp filer
  • lun map : Maps lun to an igroup
  • lun show : Show all the luns on a filer
  • igroup create : Creates netapp igroup
  • lun stats : Show lun I/O statistics
  • disk show : Shows all the disks on the filer
  • disk zero spares : Zeros the spare disks
  • disk_fw_update : Upgrades the disk firmware on all disks
  • options : Display/Set options on netapp filer
  • options nfs : Display/Set NFS options
  • options timed : Display/Set NTP options on netapp.
  • options autosupport : Display/Set autosupport options
  • options cifs : Display/Set cifs options
  • options tcp : Display/Set TCP options
  • options net : Display/Set network options
  • ndmpcopy <src-path> <dst-path> : Initiates ndmpcopy
  • ndmpd status : Displays status of ndmpd
  • ndmpd killall : Terminates all the ndmpd processes.
  • ifconfig : Displays/Sets IP address on a network/vif interface
  • vif create : Creates a VIF (bonding/trunking/teaming)
  • vif status : Displays status of a vif
  • netstat : Displays network statistics
  • sysstat -us 1 : Begins a 1-second interval sample of the filer's current utilization (Ctrl-C to end)
  • nfsstat : Shows nfs statistics
  • nfsstat -l : Displays nfs stats per client
  • nfs_hist : Displays the nfs histogram
  • statit : Begins/ends a performance workload sampling [-b starts / -e ends]
  • stats : Displays stats for every counter on netapp. Read stats man page for more info
  • ifstat : Displays Network interface stats
  • qtree stats : displays I/O stats of qtree
  • environment : display environment status on shelves and chassis of the filer
  • storage show <disk|shelf|adapter> : Shows storage component details
  • snapmirror initialize : Initializes a snapmirror relationship
  • snapmirror update : Manually Update snapmirror relation
  • snapmirror resync : Resyncs a broken snapmirror
  • snapmirror quiesce : Quiesces a snapmirror relationship
  • snapmirror break : Breaks a snapmirror relationship
  • snapmirror abort : Abort a running snapmirror
  • snapmirror status : Shows snapmirror status
  • lock status -h : Displays locks held by filer
  • sm_mon : Manage the locks
  • storage download shelf : Installs the shelf firmware
  • software get : Download the Netapp OS software
  • software install : Installs OS
  • download : Updates the installed OS
  • cf status : Displays cluster status
  • cf takeover : Takes over the cluster partner
  • cf giveback : Gives back control to the cluster partner
  • reboot : Reboots a filer
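
To show how several of these commands fit together, here is a rough sketch of a 7-mode provisioning flow; the aggregate, volume, qtree, and host names below (aggr1, vol_data, q_home, host1) are placeholders, not real objects:

    filer> aggr create aggr1 -t raid_dp 16
    filer> vol create vol_data aggr1 500g
    filer> snap reserve vol_data 10
    filer> snap sched vol_data 0 2 6@8,12,16,20
    filer> qtree create /vol/vol_data/q_home
    filer> wrfile -a /etc/exports /vol/vol_data/q_home -sec=sys,rw=host1,root=host1
    filer> exportfs -a
    filer> df -h vol_data

A snapmirror relationship would then be initialized from the destination filer, assuming a restricted destination volume already exists there, roughly as:

    dstfiler> snapmirror initialize -S srcfiler:vol_data vol_data_mir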

    Source: http://unixfoo.blogspot.com
Posted by [TheWon]