
Tehuti Networks Ltd. TN9710P 10GBase-T/NBASE-T Ethernet Adapter

Hardware
  • Kamil tested this 10 Gbps network card and got it working.

    (23:56:37) ayufan: Tehuti Networks Ltd. TN9710P 10GBase-T/NBASE-T Ethernet Adapter it does work

    He built support for it into both kernels (4.4.x & 4.19.x).

    It should be this card:
    http://www.tehutinetworks.net/?t=LV&L1=3&L2=0&L3=0&L7=156
    I was not able to find out the price or availability. A quick detection check is sketched below.
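
    To verify that the card is recognized on a running system, something like this should do (a sketch; the vendor string comes from the lspci name quoted above, and the module name tn40xx is taken from the driver repo below):

    # List PCI devices together with the kernel driver bound to them
    lspci -k | grep -A 3 -i tehuti
    # Check the kernel log for messages from the tn40xx driver
    dmesg | grep -i tn40xx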

  • This repo contains the tn40xx Linux driver for 10Gbit NICs based on the TN4010 MAC from Tehuti Networks.

    This driver enables the following 10Gb SFP+ NICs:

    D-Link DXE-810S
    Edimax EN-9320SFP+
    StarTech PEX10000SFP
    Synology E10G15-F1
    ... as well as the following 10GBase-T/NBASE-T NICs:

    D-Link DXE-810T
    Edimax EN-9320TX-E
    EXSYS EX-6061-2
    Intellinet 507950
    StarTech ST10GSPEXNB

    Source: https://github.com/ayufan-rock64/tn40xx-driver/tree/master
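
    Building and loading the driver should follow the usual out-of-tree module workflow (a sketch, assuming the repo's Makefile follows standard kernel build conventions; the exact targets may differ):

    # Build against the headers of the running kernel
    git clone https://github.com/ayufan-rock64/tn40xx-driver.git
    cd tn40xx-driver
    make
    # Load the freshly built module and confirm it registered
    sudo insmod tn40xx.ko
    lsmod | grep tn40xx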

  • Mainline 5.11.x

    Images
    0 votes
    1 post
    220 views
    No one has replied
  • Image 0.9.16 - Quick test

    ROCKPro64
    0 votes
    5 posts
    365 views
    FrankMF

    A quick test; ok, it turned out a bit longer 🙂 I didn't manage it with Debian Buster Minimal 😞 That's not to say it can't be done; WLAN on the console just isn't my strength. Ok, desktop it is.

    bionic-mate-rockpro64-0.9.16-1163-armhf.img.xz

    Installed, quickly enabled 5G WLAN, logged in. Unplugged the network cable, started Firefox, and played a Rammstein video in 1080p. Everything runs flawlessly.


    And the PCIe NVMe SSD works too 😉

    The desktop system has become really usable by now. But I'm spoiled; to me it always feels far too slow. That shouldn't stop anyone from taking a look. Depending on the use case, it is certainly interesting.

  • 0 votes
    8 posts
    1k views
    FrankMF

    I had overlooked the link, sorry.

    There are only a handful of cards that work in the PCIe port. Why that is, I unfortunately can't tell you. It is most likely down to wrong kernel settings and missing drivers. I also have another card lying around here that only ever produces a kernel panic 😞

    This thread covers some of what works and what doesn't:
    https://forum.pine64.org/showthread.php?tid=6459
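
    When a card refuses to work, a useful first check is whether any kernel driver binds to it at all (a sketch; lspci -k ships with the standard pciutils package):

    # Show each PCI device together with the kernel driver in use, if any
    sudo lspci -k
    # Watch the kernel log live while the card initializes
    sudo dmesg --follow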

  • 0 votes
    1 post
    463 views
    No one has replied
  • USB adapter for the eMMC module

    Hardware
    0 votes
    1 post
    1k views
    No one has replied
  • NAS case for the ROCKPro64

    Moved Hardware
    0 votes
    4 posts
    2k views
    FrankMF
    POWER-LED

    The LEDs are supplied with 3.3 volts, which makes this quite simple 😉

    POWER-LED + / Pi2-Connector Pin 1 (3.3V)
    POWER-LED - / Pi2-Connector Pin 9 (GND)

    Pi2-Connector

    (photos of the wiring on the Pi2 connector)

  • USB 3.0 - SATA Adapter

    Hardware
    0 votes
    2 posts
    728 views
    FrankMF

    Today the same test with a Samsung 860 Pro, 256 GB. Filesystem used: ext4.

    rock64@rockpro64v2_1:/mnt$ uname -a
    Linux rockpro64v2_1 4.4.132-1077-rockchip-ayufan-gbaf35a9343cb #1 SMP Mon Jul 30 14:06:57 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

    Speedtest

    rock64@rockpro64v2_1:/mnt$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Iozone: Performance Test of File I/O
                Version $Revision: 3.429 $
                Compiled for 64 bit mode.
                Build: linux

        Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins,
                      Al Slater, Scott Rhine, Mike Wisner, Ken Goss,
                      Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                      Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                      Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                      Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                      Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                      Vangel Bojaxhi, Ben England, Vikentsi Lapa.

        Run began: Tue Jul 31 14:27:17 2018

        Include fsync in write timing
        O_DIRECT feature enabled
        Auto Mode
        File size set to 102400 kB
        Record Size 4 kB
        Record Size 16 kB
        Record Size 512 kB
        Record Size 1024 kB
        Record Size 16384 kB
        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

                                                            random    random
                  kB  reclen    write  rewrite     read   reread      read     write
              102400       4    17896    23350    30390    31362     21611     14611
              102400      16    56756    59180    86296    93819     51778     57327
              102400     512   201347   221961   220840   222338    210887    230781
              102400    1024   253752   273695   263884   266256    250153    273528
              102400   16384   351112   356007   366417   372264    368721    356177

    iozone test complete.

    DD write

    rock64@rockpro64v2_1:/mnt$ sudo dd if=/dev/zero of=sd.img bs=1M count=4096 conv=fdatasync
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB, 4.0 GiB) copied, 12.8358 s, 335 MB/s

    Read

    rock64@rockpro64v2_1:/mnt$ sudo dd if=sd.img of=/dev/null bs=1M count=4096
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB, 4.0 GiB) copied, 11.4787 s, 374 MB/s

    Conclusion

    With that, the adapter seems to work quite well on USB 3.0. The write speed is roughly three times higher than with the other SSD. 😉
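
    For reference, preparing the drive before a test like this typically looks as follows (a sketch; the device node /dev/sda1 and the mount point /mnt are assumptions, check lsblk first):

    # Create an ext4 filesystem on the SSD and mount it for the benchmark
    sudo mkfs.ext4 /dev/sda1
    sudo mount /dev/sda1 /mnt
    cd /mnt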

  • stretch-minimal-rockpro64

    Moved Linux
    0 votes
    3 posts
    982 views
    FrankMF

    A quick test of what the memory can do.

    rock64@rockpro64:~/tinymembench$ ./tinymembench
    tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

    ==========================================================================
    == Memory bandwidth tests                                               ==
    ==                                                                      ==
    == Note 1: 1MB = 1000000 bytes                                          ==
    == Note 2: Results for 'copy' tests show how many bytes can be          ==
    ==         copied per second (adding together read and writen           ==
    ==         bytes would have provided twice higher numbers)              ==
    == Note 3: 2-pass copy means that we are using a small temporary buffer ==
    ==         to first fetch data into it, and only then write it to the   ==
    ==         destination (source -> L1 cache, L1 cache -> destination)    ==
    == Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
    ==         brackets                                                     ==
    ==========================================================================

     C copy backwards                                     :   2812.7 MB/s
     C copy backwards (32 byte blocks)                    :   2811.9 MB/s
     C copy backwards (64 byte blocks)                    :   2632.8 MB/s
     C copy                                               :   2667.2 MB/s
     C copy prefetched (32 bytes step)                    :   2633.5 MB/s
     C copy prefetched (64 bytes step)                    :   2640.8 MB/s
     C 2-pass copy                                        :   2509.8 MB/s
     C 2-pass copy prefetched (32 bytes step)             :   2431.6 MB/s
     C 2-pass copy prefetched (64 bytes step)             :   2424.1 MB/s
     C fill                                               :   4887.7 MB/s (0.5%)
     C fill (shuffle within 16 byte blocks)               :   4883.0 MB/s
     C fill (shuffle within 32 byte blocks)               :   4889.3 MB/s
     C fill (shuffle within 64 byte blocks)               :   4889.2 MB/s
     ---
     standard memcpy                                      :   2807.3 MB/s
     standard memset                                      :   4890.4 MB/s (0.3%)
     ---
     NEON LDP/STP copy                                    :   2803.7 MB/s
     NEON LDP/STP copy pldl2strm (32 bytes step)          :   2802.1 MB/s
     NEON LDP/STP copy pldl2strm (64 bytes step)          :   2800.7 MB/s
     NEON LDP/STP copy pldl1keep (32 bytes step)          :   2745.5 MB/s
     NEON LDP/STP copy pldl1keep (64 bytes step)          :   2745.8 MB/s
     NEON LD1/ST1 copy                                    :   2801.9 MB/s
     NEON STP fill                                        :   4888.9 MB/s (0.3%)
     NEON STNP fill                                       :   4850.1 MB/s
     ARM LDP/STP copy                                     :   2803.8 MB/s
     ARM STP fill                                         :   4893.0 MB/s (0.5%)
     ARM STNP fill                                        :   4851.7 MB/s

    ==========================================================================
    == Framebuffer read tests.                                              ==
    ==                                                                      ==
    == Many ARM devices use a part of the system memory as the framebuffer, ==
    == typically mapped as uncached but with write-combining enabled.       ==
    == Writes to such framebuffers are quite fast, but reads are much       ==
    == slower and very sensitive to the alignment and the selection of      ==
    == CPU instructions which are used for accessing memory.                ==
    ==                                                                      ==
    == Many x86 systems allocate the framebuffer in the GPU memory,         ==
    == accessible for the CPU via a relatively slow PCI-E bus. Moreover,    ==
    == PCI-E is asymmetric and handles reads a lot worse than writes.       ==
    ==                                                                      ==
    == If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
    == or preferably >300 MB/s), then using the shadow framebuffer layer    ==
    == is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
    == performance improvement. For example, the xf86-video-fbturbo DDX     ==
    == uses this trick.                                                     ==
    ==========================================================================

     NEON LDP/STP copy (from framebuffer)                 :    602.5 MB/s
     NEON LDP/STP 2-pass copy (from framebuffer)          :    551.6 MB/s
     NEON LD1/ST1 copy (from framebuffer)                 :    667.1 MB/s
     NEON LD1/ST1 2-pass copy (from framebuffer)          :    605.6 MB/s
     ARM LDP/STP copy (from framebuffer)                  :    445.3 MB/s
     ARM LDP/STP 2-pass copy (from framebuffer)           :    428.8 MB/s

    ==========================================================================
    == Memory latency test                                                  ==
    ==                                                                      ==
    == Average time is measured for random memory accesses in the buffers   ==
    == of different sizes. The larger is the buffer, the more significant   ==
    == are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
    == accesses. For extremely large buffer sizes we are expecting to see   ==
    == page table walk with several requests to SDRAM for almost every      ==
    == memory access (though 64MiB is not nearly large enough to experience ==
    == this effect to its fullest).                                         ==
    ==                                                                      ==
    == Note 1: All the numbers are representing extra time, which needs to  ==
    ==         be added to L1 cache latency. The cycle timings for L1 cache ==
    ==         latency can be usually found in the processor documentation. ==
    == Note 2: Dual random read means that we are simultaneously performing ==
    ==         two independent memory accesses at a time. In the case if    ==
    ==         the memory subsystem can't handle multiple outstanding       ==
    ==         requests, dual random read has the same timings as two       ==
    ==         single reads performed one after another.                    ==
    ==========================================================================

    block size : single random read / dual random read
          1024 :    0.0 ns          /     0.0 ns
          2048 :    0.0 ns          /     0.0 ns
          4096 :    0.0 ns          /     0.0 ns
          8192 :    0.0 ns          /     0.0 ns
         16384 :    0.0 ns          /     0.0 ns
         32768 :    0.0 ns          /     0.0 ns
         65536 :    4.5 ns          /     7.2 ns
        131072 :    6.8 ns          /     9.7 ns
        262144 :    9.8 ns          /    12.8 ns
        524288 :   11.4 ns          /    14.7 ns
       1048576 :   16.0 ns          /    22.6 ns
       2097152 :  114.0 ns          /   175.3 ns
       4194304 :  161.7 ns          /   219.9 ns
       8388608 :  190.7 ns          /   241.5 ns
      16777216 :  205.3 ns          /   250.5 ns
      33554432 :  212.9 ns          /   255.5 ns
      67108864 :  222.3 ns          /   271.1 ns
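
    For anyone wanting to reproduce the run, tinymembench is built straight from source (a sketch; the repo URL https://github.com/ssvb/tinymembench matches the tool's upstream, but verify before use):

    # Fetch, build, and run tinymembench
    git clone https://github.com/ssvb/tinymembench.git
    cd tinymembench
    make
    ./tinymembench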