
ROCKPro64 - Booting from USB3

  • Another quick test today.

    Hardware

    • USB3 SSD Samsung T5 500GB
    • PCIe NVMe SSD Samsung 960 EVO with 250 GB

    Software

    • U-Boot written to SPI flash: uboot 2017.09.....1062

    • The SSD contains the bionic-minimal-rockpro64-0.9.0-1142-arm64.img image

      rock64@rockpro64:~$ uname -a
      Linux rockpro64 4.4.184-1220-rockchip-ayufan-g5fe46b4c9a4a #1 SMP Sun Jul 7 13:45:25 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux
      

    Booting works without any problems.
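
    For anyone reproducing this: such an image usually ends up on the USB SSD with dd, roughly like this (only a sketch; /dev/sdX is a placeholder for the SSD, check it with lsblk first, and the .img.xz name assumes the compressed image from the release page):

    # write the release image to the USB SSD (destructive, double-check the device name!)
    xz -d -c bionic-minimal-rockpro64-0.9.0-1142-arm64.img.xz | sudo dd of=/dev/sdX bs=1M status=progress
    sudo sync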

    df

    rock64@rockpro64:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            991M     0  991M   0% /dev
    tmpfs           199M  568K  199M   1% /run
    /dev/sda7       459G  1.3G  439G   1% /
    tmpfs           995M     0  995M   0% /dev/shm
    tmpfs           5.0M  4.0K  5.0M   1% /run/lock
    tmpfs           995M     0  995M   0% /sys/fs/cgroup
    /dev/sda6       112M  4.0K  112M   1% /boot/efi
    tmpfs           199M     0  199M   0% /run/user/1000
    

    dd

    rock64@rockpro64:~$ sudo dd if=/dev/zero of=sd.img bs=1M count=4096 conv=fdatasync
    [sudo] password for rock64: 
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB, 4.0 GiB) copied, 12.3658 s, 347 MB/s
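
    That only measures sequential writes. For a rough sequential read figure one could read the file back after dropping the page cache (not part of the test above, just a sketch):

    # drop the page cache so the read really hits the SSD and not RAM
    sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
    sudo dd if=sd.img of=/dev/null bs=1M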
    

    iozone

    rock64@rockpro64:/$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 
     	Iozone: Performance Test of File I/O
     	        Version $Revision: 3.429 $
     		Compiled for 64 bit mode.
     		Build: linux 
     
     	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
     	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
     	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
     	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
     	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
     	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
     	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
     	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.
     
     	Run began: Tue Jul 16 17:19:46 2019
     
     	Include fsync in write timing
     	O_DIRECT feature enabled
     	Auto Mode
     	File size set to 102400 kB
     	Record Size 4 kB
     	Record Size 16 kB
     	Record Size 512 kB
     	Record Size 1024 kB
     	Record Size 16384 kB
     	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
     	Output is in kBytes/sec
     	Time Resolution = 0.000001 seconds.
     	Processor cache size set to 1024 kBytes.
     	Processor cache line size set to 32 bytes.
     	File stride size set to 17 * record size.
                                                                   random    random     bkwd    record    stride                                    
                   kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
               102400       4    22066    22866    32054    31620    22357    23561                                                          
               102400      16    65752    58709    61745    79903    61869    89612                                                          
               102400     512   233974   254912   229548   230554   223731   255448                                                          
               102400    1024   288587   308089   275765   277139   268987   307692                                                          
               102400   16384   402823   413363   389810   392131   392335   414199                                                          
     
     iozone test complete.
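
    In case someone wants to reproduce this: iozone comes straight from the Ubuntu repositories (package name iozone3, if I remember correctly):

    sudo apt-get update
    sudo apt-get install iozone3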
    

    blkid

    rock64@rockpro64:~$ sudo blkid
    /dev/nvme0n1: LABEL="TEST" UUID="962851a6-b0c8-4fe1-a7eb-1f1a68a120bb" TYPE="ext4"
    /dev/sda1: PARTLABEL="loader1" PARTUUID="552aa722-be77-486f-abfb-f3649606441d"
    /dev/sda2: PARTLABEL="reserved1" PARTUUID="8329ccf3-cfdd-44b8-b654-8bac4d118d18"
    /dev/sda3: PARTLABEL="reserved2" PARTUUID="f81a3d66-7c6c-4605-ab47-7b76ded05d72"
    /dev/sda4: PARTLABEL="loader2" PARTUUID="95fd176f-85f7-4672-8a17-bd034bd9a13a"
    /dev/sda5: PARTLABEL="atf" PARTUUID="2af9051f-1b48-49b9-b326-c11aeed5de35"
    /dev/sda6: SEC_TYPE="msdos" LABEL="boot" UUID="4000-6196" TYPE="vfat" PARTLABEL="boot" PARTUUID="c9c6f3cb-6fd4-469e-952b-2f5e8ee62925"
    /dev/sda7: LABEL="linux-root" UUID="4e124868-d83a-463e-b7ab-68cc7c55cc23" TYPE="ext4" PARTLABEL="root" PARTUUID="84865f57-6395-48fa-b2c3-a3f71fd246e4"
    /dev/zram0: UUID="660548f9-6151-4fd6-a0f0-3980b1f56a54" TYPE="swap"
    /dev/zram1: UUID="25f9ca0a-99f6-408a-a8b7-06e7f80c715d" TYPE="swap"
    /dev/zram2: UUID="f25ef32b-d15d-4100-b544-422aae1be00e" TYPE="swap"
    /dev/zram3: UUID="fb972e95-deb9-4c82-ba72-86a016c5f5a3" TYPE="swap"
    /dev/zram4: UUID="ec437d02-400a-4d17-9a6d-b9143a87e400" TYPE="swap"
    /dev/zram5: UUID="7ebb66d9-3e04-4698-96ea-7f08285cff70" TYPE="swap"
    

    The NVMe SSD is addressed cleanly.
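
    It carries an ext4 filesystem directly on the device, without a partition table (that matches the blkid output above), and is mounted at /mnt. A sketch of the commands, keeping in mind that mkfs of course wipes the SSD:

    sudo mkfs.ext4 -L TEST /dev/nvme0n1    # label TEST, as shown by blkid
    sudo mount /dev/nvme0n1 /mnt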

    rock64@rockpro64:/mnt$ sudo dd if=/dev/zero of=sd.img bs=1M count=4096 conv=fdatasync
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB, 4.0 GiB) copied, 11.7229 s, 366 MB/s
    

    and

    rock64@rockpro64:/mnt$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 
    	Iozone: Performance Test of File I/O
    	        Version $Revision: 3.429 $
    		Compiled for 64 bit mode.
    		Build: linux 
    
    	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
    	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
    	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
    	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
    	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
    	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
    	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
    	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.
    
    	Run began: Tue Jul 16 17:18:45 2019
    
    	Include fsync in write timing
    	O_DIRECT feature enabled
    	Auto Mode
    	File size set to 102400 kB
    	Record Size 4 kB
    	Record Size 16 kB
    	Record Size 512 kB
    	Record Size 1024 kB
    	Record Size 16384 kB
    	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    	Output is in kBytes/sec
    	Time Resolution = 0.000001 seconds.
    	Processor cache size set to 1024 kBytes.
    	Processor cache line size set to 32 bytes.
    	File stride size set to 17 * record size.
                                                                  random    random     bkwd    record    stride                                    
                  kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
              102400       4    81283   110138    93170    95524    33148    73284                                                          
              102400      16   138311   209171   245732   249358   115781   176634                                                          
              102400     512   569812   578828   481322   487712   437363   591439                                                          
              102400    1024   577200   657983   505823   513736   483870   632656                                                          
              102400   16384   983078  1058890  1051166  1105247  1099474  1139649                                                          
    
    iozone test complete.
    

    ip a

     rock64@rockpro64:~$ ip a
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
            valid_lft forever preferred_lft forever
         inet6 ::1/128 scope host 
            valid_lft forever preferred_lft forever
     2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
         link/ether 62:03:b0:d6:dc:b3 brd ff:ff:ff:ff:ff:ff
         inet 192.168.3.19/24 brd 192.168.3.255 scope global dynamic eth0
            valid_lft 6411sec preferred_lft 6411sec
         inet6 fe80::6003:b0ff:fed6:dcb3/64 scope link 
            valid_lft forever preferred_lft forever
     3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state DORMANT group default qlen 1000
         link/ether ac:83:f3:e6:1f:b2 brd ff:ff:ff:ff:ff:ff
    

    WLAN is detected; it is not configured here.

    Reboot

    Occasionally it hung on a reboot. That probably has to do with the PCIe card!? By the way, does booting from SPI now work flawlessly for you out there? Every time? I admit I've somewhat lost track of the current state of development there.

    Here, in this combination, it looks usable.

  • I currently have several different boot configurations running:

    NVMe (root) / SD card boot

    eMMC 64 GB (root and boot)

    USB3 SSD (root) / SD card boot

    all of them with:

    shutdown -r: no reset / reboot problems since ayufan 0.8

    Only my "first" RockPro64 (NVMe root / SD card boot), despite ayufan 0.8, still refuses to enter U-Boot after a shutdown unless a monitor is connected (standby) .. ...

    How nice it would be if SPI boot also worked flawlessly with NVMe and USB3; then I could do without the SD cards and the kernel-update workarounds. (probably a relief for you as well 🙂 )
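
    For reference, pointing the root filesystem at the NVMe or the USB SSD on these images boils down to adjusting the APPEND line in /etc/default/extlinux (only a sketch; the UUID is just an example value taken from the blkid output above, use the one of your own root partition):

    # /etc/default/extlinux (sketch)
    APPEND="$APPEND root=UUID=962851a6-b0c8-4fe1-a7eb-1f1a68a120bb rootwait rootfstype=ext4"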

  • Yeah, that's exactly what I'm waiting for too.

    If I've followed this correctly, that could be the next item on Kamil's list.

  • ROCKPro64 - DKMS possible in release RC12

  • ROCKPro64 - The first time


    Today I can answer the question of all questions 🙂

    So far the question of whether WLAN and PCIe can be used together had unfortunately remained unanswered!! It works!!

    I got a RecalBox test image from MrFixit; it uses the same Debian as above. Over the last few days you could follow on IRC how the underlying problem was narrowed down and, apparently, a fix was put together so that both work at the same time. MrFixit built it into RecalBox and I got to test it.

    # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP8000> mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether 62:03:b0:d6:dc:b3 brd ff:ff:ff:ff:ff:ff
    3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP8000> mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether ac:83:f3:e6:1f:b2 brd ff:ff:ff:ff:ff:ff
        inet 192.168.178.27/24 brd 192.168.178.255 scope global wlan0
           valid_lft forever preferred_lft forever
        inet6 2a02:908:1262:4680:ae83:f3ff:fee6:1fb2/64 scope global dynamic
           valid_lft 7145sec preferred_lft 3545sec
        inet6 fe80::ae83:f3ff:fee6:1fb2/64 scope link
           valid_lft forever preferred_lft forever
    # ls /mnt
    bin        etc         media  recalbox  sd.img   test2.img
    boot       home        mnt    root      selinux  tmp
    crypthome  lib         opt    run       srv      usr
    dev        lost+found  proc   sbin      sys      var
    # fdisk
    BusyBox v1.27.2 (2019-02-01 22:43:19 EST) multi-call binary.
    Usage: fdisk [-ul] [-C CYLINDERS] [-H HEADS] [-S SECTORS] [-b SSZ] DISK
    Change partition table
            -u              Start and End are in sectors (instead of cylinders)
            -l              Show partition table for each DISK, then exit
            -b 2048         (for certain MO disks) use 2048-byte sectors
            -C CYLINDERS    Set number of cylinders/heads/sectors
            -H HEADS        Typically 255
            -S SECTORS      Typically 63
    # fdisk -l
    Disk /dev/mmcblk0: 15 GB, 15931539456 bytes, 31116288 sectors
    486192 cylinders, 4 heads, 16 sectors/track
    Units: cylinders of 64 * 512 = 32768 bytes
    Device         Boot StartCHS    EndCHS       StartLBA     EndLBA    Sectors  Size Id Type
    /dev/mmcblk0p1 *    2,10,9      10,50,40        32768     163839     131072 64.0M  c Win95 FAT32 (LBA)
    Partition 1 does not end on cylinder boundary
    /dev/mmcblk0p2 *    16,81,2     277,102,17     262144    4456447    4194304 2048M 83 Linux
    Partition 2 does not end on cylinder boundary
    /dev/mmcblk0p3      277,102,18  1023,254,63   4456448   31115263   26658816 12.7G 83 Linux
    Partition 3 does not end on cylinder boundary
    Disk /dev/nvme0n1: 233 GB, 250059350016 bytes, 488397168 sectors
    2543735 cylinders, 12 heads, 16 sectors/track
    Units: cylinders of 192 * 512 = 98304 bytes
    Device         Boot StartCHS    EndCHS       StartLBA     EndLBA    Sectors Size Id Type
    /dev/nvme0n1p1      1,0,1       907,11,16        2048  488397167  488395120 232G 83 Linux
    #

    At the top you can see a working WLAN connection; the LAN cable was unplugged. Below that you can see the PCIe NVMe SSD, mounted at /mnt, together with a listing of its contents.

    That should prove that the approach to the solution works. Unfortunately I can't say that it runs stably at this point. I get spontaneous reboots, but can't pin down the error at the moment. Let's see whether I find anything else.

    But it's a start!

  • New script "change-default-kernel.sh"

  • Tehuti Networks Ltd. TN9710P 10GBase-T/NBASE-T Ethernet Adapter


    This repo contains the tn40xx Linux driver for 10Gbit NICs based on the TN4010 MAC from Tehuti Networks.

    This driver enables the following 10Gb SFP+ NICs:

    D-Link DXE-810S
    Edimax EN-9320SFP+
    StarTech PEX10000SFP
    Synology E10G15-F1
    ... as well as the following 10GBase-T/NBASE-T NICs:

    D-Link DXE-810T
    Edimax EN-9320TX-E
    EXSYS EX-6061-2
    Intellinet 507950
    StarTech ST10GSPEXNB

    Source: https://github.com/ayufan-rock64/tn40xx-driver/tree/master
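
    Building it should follow the usual out-of-tree kernel module routine, roughly like this (only a sketch; the module name and the exact steps are assumptions, the README in the repo has the real instructions, and matching kernel headers are required):

    git clone https://github.com/ayufan-rock64/tn40xx-driver.git
    cd tn40xx-driver
    make
    sudo insmod ./tn40xx.ko    # module name assumed from the driver name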

  • SATA card with Marvell 88SE9230 chipset


    OK, there is another option.

    Kamil helped me out a bit more. With the following change the drives are found.

    hmm, I had to add /etc/default/extlinux: libahci.skip_host_reset=1

    It then looks like this.

    # Configure timeout to choose the kernel
    # TIMEOUT="10"
    # Configure default kernel to boot: check all kernels in `/boot/extlinux/extlinux.conf`
    # DEFAULT="kernel-4.4.126-rockchip-ayufan-253"
    # Configure additional kernel configuration options
    APPEND="$APPEND root=LABEL=linux-root rootwait rootfstype=ext4 libahci.skip_host_reset=1"
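
    After a reboot you can check whether the parameter actually made it onto the kernel command line (plain procfs, nothing image-specific):

    grep libahci /proc/cmdline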

    After that, the drives were visible.

    root@rockpro64:/tmp/etc/default# blkid
    /dev/sda2: SEC_TYPE="msdos" LABEL_FATBOOT="boot-efi" LABEL="boot-efi" UUID="ABCD-FC7D" TYPE="vfat" PARTLABEL="boot_efi" PARTUUID="72e36967-4050-4bb3-8f8f-bf6755c38f28"
    /dev/sda3: LABEL="linux-boot" UUID="8e289a3e-0f9b-4da1-a147-51e03390637c" TYPE="ext4" PARTLABEL="linux_boot" PARTUUID="fe944fd2-3e42-4202-8a95-656e9bdb4be6"
    /dev/sda4: LABEL="linux-root" UUID="3e9513c6-dfd1-48c9-bee2-04bb5a153056" TYPE="ext4" PARTLABEL="linux_root" PARTUUID="d2d1dd88-030d-4f74-998f-7c9ce7d385d0"
    /dev/sdb2: SEC_TYPE="msdos" LABEL_FATBOOT="boot-efi" LABEL="boot-efi" UUID="56C9-F745" TYPE="vfat" PARTLABEL="boot_efi" PARTUUID="919c8f73-5f25-4a01-9072-3a5ed9a88ff2"
    /dev/sdb3: LABEL="linux-boot" UUID="23c19647-f4a1-4197-a877-f1bb03456bef" TYPE="ext4" PARTLABEL="linux_boot" PARTUUID="093d0cc0-d122-4dce-aeb5-4e266b4b7d9d"
    /dev/sdb4: LABEL="linux-root" UUID="f1c74331-8318-4ee8-a4f7-f0c169fb9944" TYPE="ext4" PARTLABEL="linux_root" PARTUUID="964ab457-58d5-40c4-bb02-dfd37bd2f0da"
    /dev/sda1: PARTLABEL="loader1" PARTUUID="37466429-e4a4-495c-b9a1-3f74625a3cae"
    /dev/sdb1: PARTLABEL="loader1" PARTUUID="33f692b3-54cb-4a37-b602-21a2baf32fa0"

    But even with this, booting from the SATA drive is not possible.

    I'd like to quote Kamil on this as well.

    (11:44:09) ayufanWithPM: will look later, but this controller is tricky, also on x86 as well
    (11:44:16) ayufanWithPM: jms585 seems to be significantly more stable

    Maybe he'll get that fixed 😉

  • Kernel 4.4.x


    4.4.202-1237-rockchip-ayufan released

    PATCH: kernel 4.4.201-202
  • Image 0.7.8 - Latest release

  • stretch-minimal-rockpro64


    A quick test of what the memory can do.
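
    tinymembench is built straight from the upstream sources (a sketch, assuming git, gcc and make are installed):

    git clone https://github.com/ssvb/tinymembench.git
    cd tinymembench
    make
    ./tinymembench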

    rock64@rockpro64:~/tinymembench$ ./tinymembench
    tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
    ==========================================================================
    == Memory bandwidth tests                                               ==
    ==                                                                      ==
    == Note 1: 1MB = 1000000 bytes                                          ==
    == Note 2: Results for 'copy' tests show how many bytes can be          ==
    ==         copied per second (adding together read and writen           ==
    ==         bytes would have provided twice higher numbers)              ==
    == Note 3: 2-pass copy means that we are using a small temporary buffer ==
    ==         to first fetch data into it, and only then write it to the   ==
    ==         destination (source -> L1 cache, L1 cache -> destination)    ==
    == Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
    ==         brackets                                                     ==
    ==========================================================================
     C copy backwards                             :   2812.7 MB/s
     C copy backwards (32 byte blocks)            :   2811.9 MB/s
     C copy backwards (64 byte blocks)            :   2632.8 MB/s
     C copy                                       :   2667.2 MB/s
     C copy prefetched (32 bytes step)            :   2633.5 MB/s
     C copy prefetched (64 bytes step)            :   2640.8 MB/s
     C 2-pass copy                                :   2509.8 MB/s
     C 2-pass copy prefetched (32 bytes step)     :   2431.6 MB/s
     C 2-pass copy prefetched (64 bytes step)     :   2424.1 MB/s
     C fill                                       :   4887.7 MB/s (0.5%)
     C fill (shuffle within 16 byte blocks)       :   4883.0 MB/s
     C fill (shuffle within 32 byte blocks)       :   4889.3 MB/s
     C fill (shuffle within 64 byte blocks)       :   4889.2 MB/s
     ---
     standard memcpy                              :   2807.3 MB/s
     standard memset                              :   4890.4 MB/s (0.3%)
     ---
     NEON LDP/STP copy                            :   2803.7 MB/s
     NEON LDP/STP copy pldl2strm (32 bytes step)  :   2802.1 MB/s
     NEON LDP/STP copy pldl2strm (64 bytes step)  :   2800.7 MB/s
     NEON LDP/STP copy pldl1keep (32 bytes step)  :   2745.5 MB/s
     NEON LDP/STP copy pldl1keep (64 bytes step)  :   2745.8 MB/s
     NEON LD1/ST1 copy                            :   2801.9 MB/s
     NEON STP fill                                :   4888.9 MB/s (0.3%)
     NEON STNP fill                               :   4850.1 MB/s
     ARM LDP/STP copy                             :   2803.8 MB/s
     ARM STP fill                                 :   4893.0 MB/s (0.5%)
     ARM STNP fill                                :   4851.7 MB/s
    ==========================================================================
    == Framebuffer read tests.                                              ==
    ==                                                                      ==
    == Many ARM devices use a part of the system memory as the framebuffer, ==
    == typically mapped as uncached but with write-combining enabled.       ==
    == Writes to such framebuffers are quite fast, but reads are much       ==
    == slower and very sensitive to the alignment and the selection of      ==
    == CPU instructions which are used for accessing memory.                ==
    ==                                                                      ==
    == Many x86 systems allocate the framebuffer in the GPU memory,         ==
    == accessible for the CPU via a relatively slow PCI-E bus. Moreover,    ==
    == PCI-E is asymmetric and handles reads a lot worse than writes.       ==
    ==                                                                      ==
    == If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
    == or preferably >300 MB/s), then using the shadow framebuffer layer    ==
    == is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
    == performance improvement. For example, the xf86-video-fbturbo DDX     ==
    == uses this trick.                                                     ==
    ==========================================================================
     NEON LDP/STP copy (from framebuffer)         :    602.5 MB/s
     NEON LDP/STP 2-pass copy (from framebuffer)  :    551.6 MB/s
     NEON LD1/ST1 copy (from framebuffer)         :    667.1 MB/s
     NEON LD1/ST1 2-pass copy (from framebuffer)  :    605.6 MB/s
     ARM LDP/STP copy (from framebuffer)          :    445.3 MB/s
     ARM LDP/STP 2-pass copy (from framebuffer)   :    428.8 MB/s
    ==========================================================================
    == Memory latency test                                                  ==
    ==                                                                      ==
    == Average time is measured for random memory accesses in the buffers   ==
    == of different sizes. The larger is the buffer, the more significant   ==
    == are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
    == accesses. For extremely large buffer sizes we are expecting to see   ==
    == page table walk with several requests to SDRAM for almost every      ==
    == memory access (though 64MiB is not nearly large enough to experience ==
    == this effect to its fullest).                                         ==
    ==                                                                      ==
    == Note 1: All the numbers are representing extra time, which needs to  ==
    ==         be added to L1 cache latency. The cycle timings for L1 cache ==
    ==         latency can be usually found in the processor documentation. ==
    == Note 2: Dual random read means that we are simultaneously performing ==
    ==         two independent memory accesses at a time. In the case if    ==
    ==         the memory subsystem can't handle multiple outstanding       ==
    ==         requests, dual random read has the same timings as two       ==
    ==         single reads performed one after another.                    ==
    ==========================================================================
    block size : single random read / dual random read
          1024 :    0.0 ns          /     0.0 ns
          2048 :    0.0 ns          /     0.0 ns
          4096 :    0.0 ns          /     0.0 ns
          8192 :    0.0 ns          /     0.0 ns
         16384 :    0.0 ns          /     0.0 ns
         32768 :    0.0 ns          /     0.0 ns
         65536 :    4.5 ns          /     7.2 ns
        131072 :    6.8 ns          /     9.7 ns
        262144 :    9.8 ns          /    12.8 ns
        524288 :   11.4 ns          /    14.7 ns
       1048576 :   16.0 ns          /    22.6 ns
       2097152 :  114.0 ns          /   175.3 ns
       4194304 :  161.7 ns          /   219.9 ns
       8388608 :  190.7 ns          /   241.5 ns
      16777216 :  205.3 ns          /   250.5 ns
      33554432 :  212.9 ns          /   255.5 ns
      67108864 :  222.3 ns          /   271.1 ns