
960 EVO M.2 vs. 970 PRO M.2

ROCKPro64
  • Hardware

    • 960 EVO NVMe M.2 250GB
    • 970 PRO NVMe M.2 500GB

    Software: Linux 4.18

    rock64@rockpro64v2_0:/mnt$ uname -a
    Linux rockpro64v2_0 4.18.0-rc5-1050-ayufan-ge70bd2ab8802 #1 SMP PREEMPT Thu Jul 26 08:33:14 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
    

    EVO

    iozone

    rock64@rockpro64v2_0:/mnt$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 
    	Iozone: Performance Test of File I/O
    	        Version $Revision: 3.429 $
    		Compiled for 64 bit mode.
    		Build: linux 
    
    	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
    	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
    	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
    	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
    	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
    	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
    	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
    	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.
    
    	Run began: Sat Jul 28 11:59:54 2018
    
    	Include fsync in write timing
    	O_DIRECT feature enabled
    	Auto Mode
    	File size set to 102400 kB
    	Record Size 4 kB
    	Record Size 16 kB
    	Record Size 512 kB
    	Record Size 1024 kB
    	Record Size 16384 kB
    	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    	Output is in kBytes/sec
    	Time Resolution = 0.000001 seconds.
    	Processor cache size set to 1024 kBytes.
    	Processor cache line size set to 32 bytes.
    	File stride size set to 17 * record size.
                                                                  random    random     bkwd    record    stride                                    
                  kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
              102400       4    78392   146717   161310   163664    54188   142760                                                          
              102400      16   272030   416470   446603   451929   198784   410356                                                          
              102400     512  1032819  1054756  1010756  1039591   839020  1054094                                                          
              102400    1024  1075290  1124016  1026463  1056224   942848  1126785                                                          
              102400   16384   911810  1391243  1419922  1476347  1459080  1375922                                                          
    
    iozone test complete.
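
    For anyone who wants to repeat this, here is a shortened, commented variant of the call above. The flag meanings match the report header (fsync included in the write timing, O_DIRECT, auto mode); the log file name is just an example for collecting the result rows afterwards.

    # Flags: -e include fsync in the write timing, -I use O_DIRECT (bypass the page cache),
    # -a auto mode, -s file size, -r record sizes, -i test selection
    # (0 = write/rewrite, 1 = read/re-read, 2 = random read/write)
    cd /mnt
    sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 | tee ~/evo-4.18.log

    # Keep only the result rows (they start with the 102400 kB file size)
    awk '$1 == "102400"' ~/evo-4.18.log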
    

    dd

    rock64@rockpro64v2_0:/mnt$ sudo dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.78991 s, 385 MB/s
    rock64@rockpro64v2_0:/mnt$ sudo echo 3 | sudo tee /proc/sys/vm/drop_caches 
    3
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.29534 s, 829 MB/s
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.911316 s, 1.2 GB/s
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.532248 s, 2.0 GB/s
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.714217 s, 1.5 GB/s
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.528779 s, 2.0 GB/s
    rock64@rockpro64v2_0:/mnt$ 
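
    A note on the dd figures: only the first read after dropping the caches really hits the SSD; the later 1.2 to 2.0 GB/s runs are served from the page cache. A minimal, repeatable variant of the test (the extra sudo in front of echo above is redundant, tee alone does the privileged write):

    # Sequential write; fdatasync forces the data to disk before dd reports a rate
    sudo dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
    # Drop the page cache so the following read really comes from the SSD
    echo 3 | sudo tee /proc/sys/vm/drop_caches
    # Cache-cold sequential read
    sudo dd if=tempfile of=/dev/null bs=1M count=1024
    sudo rm tempfile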
    

    PRO

    iozone

     rock64@rockpro64v2_0:/mnt$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 
     	Iozone: Performance Test of File I/O
     	        Version $Revision: 3.429 $
     		Compiled for 64 bit mode.
     		Build: linux 
     
     	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
     	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
     	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
     	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
     	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
     	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
     	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
     	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.
     
     	Run began: Sat Jul 28 12:08:50 2018
     
     	Include fsync in write timing
     	O_DIRECT feature enabled
     	Auto Mode
     	File size set to 102400 kB
     	Record Size 4 kB
     	Record Size 16 kB
     	Record Size 512 kB
     	Record Size 1024 kB
     	Record Size 16384 kB
     	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
     	Output is in kBytes/sec
     	Time Resolution = 0.000001 seconds.
     	Processor cache size set to 1024 kBytes.
     	Processor cache line size set to 32 bytes.
     	File stride size set to 17 * record size.
                                                                   random    random     bkwd    record    stride                                    
                   kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
               102400       4    83920   146526   171217   172733    56965   145921                                                          
               102400      16   271229   414900   454454   460018   193626   413496                                                          
               102400     512  1021580  1033256  1007794  1057973   990788  1075201                                                          
               102400    1024  1066333  1107758  1038792  1079089  1048932  1116344                                                          
               102400   16384   918513  1418530  1433672  1529740  1523500  1389826                                                          
     
     iozone test complete.
    

    dd

    rock64@rockpro64v2_0:/mnt$ sudo dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.76911 s, 607 MB/s
    rock64@rockpro64v2_0:/mnt$ echo 3 | sudo tee /proc/sys/vm/drop_caches 
    3
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.71439 s, 626 MB/s
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.574552 s, 1.9 GB/s
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.724723 s, 1.5 GB/s
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.70586 s, 1.5 GB/s
    rock64@rockpro64v2_0:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.512834 s, 2.1 GB/s
    rock64@rockpro64v2_0:/mnt$
    

    Software: Linux 4.4.132

    rock64@rockpro64v2_1:/mnt$ uname -a
    Linux rockpro64v2_1 4.4.132-1075-rockchip-ayufan-ga83beded8524 #1 SMP Thu Jul 26 08:22:22 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
    

    EVO

    iozone

    rock64@rockpro64v2_1:/mnt$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 
    	Iozone: Performance Test of File I/O
    	        Version $Revision: 3.429 $
    		Compiled for 64 bit mode.
    		Build: linux 
    
    	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
    	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
    	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
    	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
    	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
    	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
    	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
    	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.
    
    	Run began: Sat Jul 28 12:35:25 2018
    
    	Include fsync in write timing
    	O_DIRECT feature enabled
    	Auto Mode
    	File size set to 102400 kB
    	Record Size 4 kB
    	Record Size 16 kB
    	Record Size 512 kB
    	Record Size 1024 kB
    	Record Size 16384 kB
    	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    	Output is in kBytes/sec
    	Time Resolution = 0.000001 seconds.
    	Processor cache size set to 1024 kBytes.
    	Processor cache line size set to 32 bytes.
    	File stride size set to 17 * record size.
                                                                  random    random     bkwd    record    stride                                    
                  kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
              102400       4    39260    84776   108205   107834    32124    72701                                                          
              102400      16   120563   233999   269123   273692   117401   207395                                                          
              102400     512   643522   575756   455850   462362   416099   548623                                                          
              102400    1024   522939   613743   484305   491560   463470   617078                                                          
              102400   16384  1085393  1168020  1064472  1089797  1088203  1123589                                                          
    
    iozone test complete.
    

    dd

    rock64@rockpro64v2_1:/mnt$ sudo dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.26845 s, 473 MB/s
    rock64@rockpro64v2_1:/mnt$ sudo echo 3 | sudo tee /proc/sys/vm/drop_caches 
    3
    rock64@rockpro64v2_1:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.53688 s, 699 MB/s
    rock64@rockpro64v2_1:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.743431 s, 1.4 GB/s
    rock64@rockpro64v2_1:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.686147 s, 1.6 GB/s
    rock64@rockpro64v2_1:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.638274 s, 1.7 GB/s
    rock64@rockpro64v2_1:/mnt$ sudo dd if=tempfile of=/dev/null bs=1M count=1024 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.672767 s, 1.6 GB/s
    rock64@rockpro64v2_1:/mnt$ 
    

    Under kernel 4.4 the iozone results already look significantly worse (4k writes drop from about 78 MB/s to 39 MB/s, 4k random reads from 54 MB/s to 32 MB/s), so I'll spare myself the rest.
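
    If you want to put numbers on that, a small helper along these lines compares the 4k rows of two saved iozone logs (the file names are just examples, assuming the runs were logged as sketched further up):

    # Print the 4k result row from each log, prefixed with the file name.
    # Columns: write, rewrite, read, re-read, random read, random write (kB/s)
    awk '$1 == "102400" && $2 == "4" { print FILENAME ": " $0 }' ~/evo-4.18.log ~/evo-4.4.log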

  • The 970 is now installed in my main PC, which runs a current Linux Mint 19 Cinnamon. For comparison:

    100M

    frank@frank-MS-7A34:~$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 
    [sudo] Passwort für frank: 
        Iozone: Performance Test of File I/O
            Version $Revision: 3.429 $
            Compiled for 64 bit mode.
            Build: linux-AMD64 
    
        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                   Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                    Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                   Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                   Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                  Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                    Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                   Vangel Bojaxhi, Ben England, Vikentsi Lapa.
    
        Run began: Sun Aug 19 16:52:19 2018
    
        Include fsync in write timing
        O_DIRECT feature enabled
        Auto Mode
        File size set to 102400 kB
        Record Size 4 kB
        Record Size 16 kB
        Record Size 512 kB
        Record Size 1024 kB
        Record Size 16384 kB
        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                                 random    random     bkwd    record    stride                                    
                kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             102400       4    92640   121912   131074   139525    45719   116653                                                          
             102400      16   254286   285267   285539   320370   108049   314486                                                          
             102400     512   537947   581765   606103   598137   537701   588214                                                          
             102400    1024   566892   547921   567369   597286   518014   558686                                                          
             102400   16384  1407884  1642148  1941120  2115608  2006947  1668118                                                          
    
    iozone test complete.
    

    1000M

    frank@frank-MS-7A34:~$ sudo iozone -e -I -a -s 1000M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 
    	Iozone: Performance Test of File I/O
    	        Version $Revision: 3.429 $
    		Compiled for 64 bit mode.
    		Build: linux-AMD64 
    
    	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
    	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
    	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
    	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
    	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
    	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
    	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
    	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.
    
    	Run began: Sun Aug 19 15:28:38 2018
    
    	Include fsync in write timing
    	O_DIRECT feature enabled
    	Auto Mode
    	File size set to 1024000 kB
    	Record Size 4 kB
    	Record Size 16 kB
    	Record Size 512 kB
    	Record Size 1024 kB
    	Record Size 16384 kB
    	Command line used: iozone -e -I -a -s 1000M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    	Output is in kBytes/sec
    	Time Resolution = 0.000001 seconds.
    	Processor cache size set to 1024 kBytes.
    	Processor cache line size set to 32 bytes.
    	File stride size set to 17 * record size.
                                                                  random    random     bkwd    record    stride                                    
                  kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             1024000       4    95635   121379   108328   108265    45369   123356                                                          
             1024000      16   239238   314359   245937   241877   105865   297193                                                          
             1024000     512   596812   620661   442100   382367   351948   613525                                                          
             1024000    1024   608903   611898   434687   417192   412018   646465                                                          
             1024000   16384  1898738  2004622  2143647  2188062  2099674  1983240                                                          
    
    iozone test complete.
    

    So there still seems to be a bit of headroom left on the ROCKPro64.
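
    One thing worth checking in this context is the PCIe link the NVMe SSD actually negotiates on the ROCKPro64; lspci shows both the capability and the current status (a quick check, assuming pciutils is installed):

    # LnkCap shows what the link can do, LnkSta what was actually negotiated
    sudo lspci -vv | grep -E 'Non-Volatile|LnkCap:|LnkSta:'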
