
Kernel 4.4.x

Pinned Images
  • 4.4.154-1120-rockchip-ayufan released

    • UPSTREAM: usb: gadget: ether: Allow changing the MTU
    • UPSTREAM: ayufan: usb: gadget: ether: Allow jumbo frames
    • ayufan: gadget: ethernet: buffer 8 packets (wip)
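
    With these patches the MTU of the USB gadget network interface can be
    raised above the usual 1500, up to jumbo-frame sizes. A minimal sketch
    of using it from the shell, assuming the gadget interface shows up as
    usb0 (the interface name is an assumption):

        # raise the MTU on the gadget interface (needs the patched kernel)
        ip link set dev usb0 mtu 9000
        ip link show dev usb0   # verify
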
  • 4.4.154-1122-rockchip-ayufan released

    • rockchip: pcie: limit bus number to 31 inclusive
  • 4.4.154-1124-rockchip-ayufan released

    • defconfig: use CONFIG_HZ=250
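
    CONFIG_HZ is a build-time option; whether a running kernel was built
    with 250 Hz can be checked via /proc/config.gz (only present when
    CONFIG_IKCONFIG_PROC is enabled, which is an assumption here):

        zgrep 'CONFIG_HZ' /proc/config.gz
        # expected with this release:
        # CONFIG_HZ_250=y
        # CONFIG_HZ=250
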
  • 4.4.154-1126-rockchip-ayufan released

  • 4.4.154-1128-rockchip-ayufan released ✌

    CONFIG_SQUASHFS_XZ=y (#41)

    cyberp: defconfig: squashfs xz for snap support
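
    Snap packages are XZ-compressed squashfs images, which is why snapd
    needs this option. A quick sanity check on a running system (the snap
    path below is only an example):

        zgrep 'SQUASHFS_XZ' /proc/config.gz   # should print CONFIG_SQUASHFS_XZ=y
        # test-mount a snap read-only
        sudo mount -t squashfs -o ro,loop /var/lib/snapd/snaps/core_1234.snap /mnt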

  • 4.4.154-1130-rockchip-ayufan released

    dts: rockpro64: Enabled sdio0 and defer it until pcie is ready

    Max defer time is 2000ms, which should be enough for pcie to
    get initialized. This is a workaround for an issue with unstable
    pcie training when both sdio0 and pcie are enabled in the
    rockpro64 device tree.
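
    Whether the deferral actually takes effect can be read from the boot
    log; a simple check on the running system:

        # pcie link training should complete before sdio0/mmc probes
        dmesg | grep -Ei 'pcie|sdio|mmc'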

  • Kernel 4.4.154-1130-rockchip-ayufan does not work for me! Kernel panic!

    Hardware

    • ROCKPro64 v2.1 with 2GB RAM
    • PCIe NVMe SSD Samsung 960 EVO
    • Pine64 WiFi module
    • Boot from SD card
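
    To capture a panic like this, the serial console on UART2 is the most
    reliable route; the ROCKPro64 console runs at 1500000 baud. A minimal
    sketch, assuming a USB-UART adapter on /dev/ttyUSB0:

        picocom -b 1500000 /dev/ttyUSB0
        # or, logging everything to screenlog.0:
        screen -L /dev/ttyUSB0 1500000
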
  • 4.4.154-1132-rockchip-ayufan released

  • 4.4.154-1134-rockchip-ayufan released

    ayufan: rockpro64: enable uart0 for bt

    PCIe & WiFi module still don't work. But Kamil has kicked off a new release.

  • 4.4.167-1138-rockchip-ayufan released

    ayufan: defconfig: disable broken kernel modules

  • 4.4.167-1140-rockchip-ayufan released

    ayufan: stmmac: disable TX offload for mtu bigger than 1498

    Please also read this post on the topic.
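
    The same effect can be had by hand on an unpatched kernel with ethtool;
    a sketch, assuming the stmmac interface is eth0:

        # either stay at the standard MTU ...
        ip link set dev eth0 mtu 1500
        # ... or disable TX checksum offload before using a bigger MTU
        ethtool -K eth0 tx off
        ethtool -k eth0 | grep -i tx-checksum   # verify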

    • 4.4.167-1146-rockchip-ayufan
    • 4.4.167-1148-rockchip-ayufan
    • 4.4.167-1151-rockchip-ayufan
    • 4.4.167-1153-rockchip-ayufan

    Changes:

    • ayufan: rockchip-vpu: fix compilation errors
    • ayufan: dts: rockpro64: fix es8316 support
    • ayufan: dts: rockpro64: add missing gpu_power_model for MALI
    • ayufan: dts: pinebook-pro: fix support for sound-out

    Ayufan is preparing the images for the upcoming Pinebook Pro.

    • 4.4.167-1155-rockchip-ayufan
    • 4.4.167-1157-rockchip-ayufan
    • 4.4.167-1159-rockchip-ayufan
    • 4.4.167-1161-rockchip-ayufan

    Changes:

    • ayufan: dts: pinebook-pro: change bt/audio supply according to Android changes
    • ayufan: dts: rock64: remove unused ir-receiver
    • ayufan: dts: pinebook-pro: fix display port output
  • 4.4.167-1165-rockchip-ayufan released

    • ayufan: dts: rockpro64: enable 1.992GHz OPP
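
    Whether the new operating point is actually available can be read from
    cpufreq; on the RK3399 the big cluster is usually policy4 (the CPU
    numbering is an assumption):

        cat /sys/devices/system/cpu/cpufreq/policy4/scaling_available_frequencies
        # 1992000 should now be listed
        cat /sys/devices/system/cpu/cpufreq/policy4/scaling_max_freq
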
  • 4.4.167-1167-rockchip-ayufan released

    • ayufan: dts: pinebook-pro: fix eDP resolution
  • 4.4.167-1169-rockchip-ayufan released

    • nuumio: dts/c: rockpro64: add pcie scan sleep and enable it for rockpro64 (#45)

  • 4.4.167-1171-rockchip-ayufan released
  • 4.4.167-1173-rockchip-ayufan released

    • ayufan: dts: rockpro64: configure dmc/dfi
    • ayufan: dts: rockpro64: reconfigure OPPs for cpul/b
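
    The memory controller (dmc) shows up as a devfreq device, so the effect
    of the new configuration can be inspected via sysfs (the device name
    dmc is the usual one on RK3399, not verified against this exact
    release):

        cat /sys/class/devfreq/dmc/available_frequencies
        cat /sys/class/devfreq/dmc/cur_freq
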
  • 4.4.167-1175-rockchip-ayufan released

    • The old driver is rockchip-drm-rga

  • 4.4.167-1178-rockchip-ayufan released
  • 4.4.167-1181-rockchip-ayufan released

    • ayufan: defconfig: enable CONFIG_ROCKCHIP_RGA2
    • ayufan: dts: rockpro64: enable 32MB ion
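
    A quick check whether the RGA2 driver actually came up (the /dev/rga
    node is how the Rockchip BSP driver usually registers itself; treat
    the name as an assumption):

        ls -l /dev/rga
        dmesg | grep -i rga
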
  • 4.4.167-1183-rockchip-ayufan released

    • ayufan: dts: rock64: limit DDR to 1600MHz

  • ROCKPro64 - Debian Bullseye Part 3

    ROCKPro64 · 0 votes · 1 post · 312 views · no one has replied
  • ROCKPro64 - PCIe SATA card with JMicron JMS585 chip

    Pinned Hardware · 1 vote · 13 posts · 2k views · last post by FrankMF

    I'd like to wrap this up here: the NAS was assembled today. Here are two photos.

    IMG_20200425_102156_ergebnis.jpg

    IMG_20200425_102206_ergebnis.jpg

  • Serial console UART2 (2)

    Pinned Hardware · 0 votes · 1 post · 220 views · no one has replied
  • ROCKPro64 - WLAN antennas

    Hardware · 0 votes · 1 post · 275 views · no one has replied
  • ROCKPro64 - Armbian - changing the boot output

    Moved Armbian · 0 votes · 1 post · 477 views · no one has replied
  • ROCKPro64 - power consumption

    Hardware · 0 votes · 1 post · 795 views · no one has replied
  • Updating the ROCKPro64

    ROCKPro64 · 0 votes · 1 post · 579 views · no one has replied
  • stretch-minimal-rockpro64

    Moved Linux · 0 votes · 3 posts · 1k views · last post by FrankMF

    A quick test of what the memory can do.

    rock64@rockpro64:~/tinymembench$ ./tinymembench
    tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

    ==========================================================================
    == Memory bandwidth tests                                               ==
    ==                                                                      ==
    == Note 1: 1MB = 1000000 bytes                                          ==
    == Note 2: Results for 'copy' tests show how many bytes can be          ==
    ==         copied per second (adding together read and writen           ==
    ==         bytes would have provided twice higher numbers)              ==
    == Note 3: 2-pass copy means that we are using a small temporary buffer ==
    ==         to first fetch data into it, and only then write it to the   ==
    ==         destination (source -> L1 cache, L1 cache -> destination)    ==
    == Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
    ==         brackets                                                     ==
    ==========================================================================
     C copy backwards                                     :   2812.7 MB/s
     C copy backwards (32 byte blocks)                    :   2811.9 MB/s
     C copy backwards (64 byte blocks)                    :   2632.8 MB/s
     C copy                                               :   2667.2 MB/s
     C copy prefetched (32 bytes step)                    :   2633.5 MB/s
     C copy prefetched (64 bytes step)                    :   2640.8 MB/s
     C 2-pass copy                                        :   2509.8 MB/s
     C 2-pass copy prefetched (32 bytes step)             :   2431.6 MB/s
     C 2-pass copy prefetched (64 bytes step)             :   2424.1 MB/s
     C fill                                               :   4887.7 MB/s (0.5%)
     C fill (shuffle within 16 byte blocks)               :   4883.0 MB/s
     C fill (shuffle within 32 byte blocks)               :   4889.3 MB/s
     C fill (shuffle within 64 byte blocks)               :   4889.2 MB/s
     ---
     standard memcpy                                      :   2807.3 MB/s
     standard memset                                      :   4890.4 MB/s (0.3%)
     ---
     NEON LDP/STP copy                                    :   2803.7 MB/s
     NEON LDP/STP copy pldl2strm (32 bytes step)          :   2802.1 MB/s
     NEON LDP/STP copy pldl2strm (64 bytes step)          :   2800.7 MB/s
     NEON LDP/STP copy pldl1keep (32 bytes step)          :   2745.5 MB/s
     NEON LDP/STP copy pldl1keep (64 bytes step)          :   2745.8 MB/s
     NEON LD1/ST1 copy                                    :   2801.9 MB/s
     NEON STP fill                                        :   4888.9 MB/s (0.3%)
     NEON STNP fill                                       :   4850.1 MB/s
     ARM LDP/STP copy                                     :   2803.8 MB/s
     ARM STP fill                                         :   4893.0 MB/s (0.5%)
     ARM STNP fill                                        :   4851.7 MB/s

    ==========================================================================
    == Framebuffer read tests.                                              ==
    ==                                                                      ==
    == Many ARM devices use a part of the system memory as the framebuffer, ==
    == typically mapped as uncached but with write-combining enabled.       ==
    == Writes to such framebuffers are quite fast, but reads are much       ==
    == slower and very sensitive to the alignment and the selection of      ==
    == CPU instructions which are used for accessing memory.                ==
    ==                                                                      ==
    == Many x86 systems allocate the framebuffer in the GPU memory,         ==
    == accessible for the CPU via a relatively slow PCI-E bus. Moreover,    ==
    == PCI-E is asymmetric and handles reads a lot worse than writes.       ==
    ==                                                                      ==
    == If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
    == or preferably >300 MB/s), then using the shadow framebuffer layer    ==
    == is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
    == performance improvement. For example, the xf86-video-fbturbo DDX     ==
    == uses this trick.                                                     ==
    ==========================================================================
     NEON LDP/STP copy (from framebuffer)                 :    602.5 MB/s
     NEON LDP/STP 2-pass copy (from framebuffer)          :    551.6 MB/s
     NEON LD1/ST1 copy (from framebuffer)                 :    667.1 MB/s
     NEON LD1/ST1 2-pass copy (from framebuffer)          :    605.6 MB/s
     ARM LDP/STP copy (from framebuffer)                  :    445.3 MB/s
     ARM LDP/STP 2-pass copy (from framebuffer)           :    428.8 MB/s

    ==========================================================================
    == Memory latency test                                                  ==
    ==                                                                      ==
    == Average time is measured for random memory accesses in the buffers   ==
    == of different sizes. The larger is the buffer, the more significant   ==
    == are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
    == accesses. For extremely large buffer sizes we are expecting to see   ==
    == page table walk with several requests to SDRAM for almost every      ==
    == memory access (though 64MiB is not nearly large enough to experience ==
    == this effect to its fullest).                                         ==
    ==                                                                      ==
    == Note 1: All the numbers are representing extra time, which needs to  ==
    ==         be added to L1 cache latency. The cycle timings for L1 cache ==
    ==         latency can be usually found in the processor documentation. ==
    == Note 2: Dual random read means that we are simultaneously performing ==
    ==         two independent memory accesses at a time. In the case if    ==
    ==         the memory subsystem can't handle multiple outstanding       ==
    ==         requests, dual random read has the same timings as two       ==
    ==         single reads performed one after another.                    ==
    ==========================================================================
    block size : single random read / dual random read
          1024 :    0.0 ns          /     0.0 ns
          2048 :    0.0 ns          /     0.0 ns
          4096 :    0.0 ns          /     0.0 ns
          8192 :    0.0 ns          /     0.0 ns
         16384 :    0.0 ns          /     0.0 ns
         32768 :    0.0 ns          /     0.0 ns
         65536 :    4.5 ns          /     7.2 ns
        131072 :    6.8 ns          /     9.7 ns
        262144 :    9.8 ns          /    12.8 ns
        524288 :   11.4 ns          /    14.7 ns
       1048576 :   16.0 ns          /    22.6 ns
       2097152 :  114.0 ns          /   175.3 ns
       4194304 :  161.7 ns          /   219.9 ns
       8388608 :  190.7 ns          /   241.5 ns
      16777216 :  205.3 ns          /   250.5 ns
      33554432 :  212.9 ns          /   255.5 ns
      67108864 :  222.3 ns          /   271.1 ns