Mirror of https://github.com/mochi-mqtt/server.git (synced 2025-10-24 08:33:14 +08:00)

Compare commits: recommende...v2.6.6 (50 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 8f52b891d5 | |
| | 47536b77f2 | |
| | 830de149dc | |
| | 34f9370f8c | |
| | dc272d2c36 | |
| | 82c96fa4e3 | |
| | 01f81ebeee | |
| | cc3f827fc1 | |
| | 5966c7fe0d | |
| | b26e03a433 | |
| | 6fc4027a78 | |
| | b9d2dfb824 | |
| | d5d9b02b28 | |
| | 64ea905c41 | |
| | 57997ef0c1 | |
| | 21491d9b4e | |
| | cb217cd3b3 | |
| | 6b3b30e412 | |
| | e2cb688869 | |
| | d048e4bef7 | |
| | 47162a3770 | |
| | 074e1b06ae | |
| | 26418c6fd8 | |
| | 26720c2f6e | |
| | d30592b95b | |
| | 40e9cdb383 | |
| | e9f72154b6 | |
| | 69412dd23c | |
| | 686c35ac0c | |
| | 65c78534dc | |
| | 83db7fff56 | |
| | 10a02ab3c7 | |
| | 5058333f36 | |
| | b2ab984949 | |
| | 5523d15a9b | |
| | c6c7c296f6 | |
| | 4c682384c5 | |
| | 624dde0986 | |
| | dc4eecdfb7 | |
| | e8f151bf1f | |
| | 4983b6b977 | |
| | 99e50ae74e | |
| | 8e52e49b94 | |
| | 4c0c862dcd | |
| | 2f2d867170 | |
| | 916d022093 | |
| | 11e0256959 | |
| | 858a28ea4b | |
| | 82f32cf53d | |
| | b4e2c61a72 | |
.gitignore (vendored, 3 changed lines)

@@ -1,4 +1,5 @@
cmd/mqtt
.DS_Store
*.db
.idea
vendor
Dockerfile (13 changed lines)

@@ -11,21 +11,12 @@ RUN go mod download

COPY . ./

RUN go build -o /app/mochi ./cmd
RUN go build -o /app/mochi ./cmd/docker

FROM alpine

WORKDIR /
COPY --from=builder /app/mochi .

# tcp
EXPOSE 1883

# websockets
EXPOSE 1882

# dashboard
EXPOSE 8080

ENTRYPOINT [ "/mochi" ]
CMD ["/cmd/docker", "--config", "config.yaml"]
README-CN.md (92 changed lines)

@@ -10,7 +10,8 @@

</p>

[English](README.md) | [简体中文](README-CN.md)
[English](README.md) | [简体中文](README-CN.md) | [日本語](README-JP.md) | [Translators wanted!](https://github.com/orgs/mochi-mqtt/discussions/310)

🎆 **mochi-co/mqtt is now part of the new mochi-mqtt organisation.** For details, [read the announcement.](https://github.com/orgs/mochi-mqtt/discussions/271)

@@ -44,13 +45,13 @@ MQTT stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and
- Passes all [Paho interoperability tests](https://github.com/eclipse/paho.mqtt.testing/tree/master/interoperability) (MQTT v5 and MQTT v3).
- Over a thousand carefully considered unit test scenarios.
- Supports TCP, Websocket (including SSL/TLS) and $SYS server status monitoring.
- Built-in persistence based on Redis, Badger and Bolt (using hooks; you can also create your own).
- Built-in persistence based on Redis, Badger, Pebble and Bolt (using hooks; you can also create your own).
- Built-in rule-based authentication and ACL permission management (using hooks; you can also create your own).

### Compatibility Notes
Because the v5 specification overlaps with earlier versions of MQTT, the server accepts both v5 and v3 clients, but when v5 and v3 clients are connected, properties and features provided for v5 clients are downgraded for v3 clients (for example user properties).

Support for MQTT v3.0.0 and v3.1.1 is treated as hybrid compatibility. Where the v3 specification is not explicitly restrictive, the newer, safety-first v5 behaviour is used instead - for example expiry of retained messages, expiry of in-transit (inflight) messages, client expiry, and limits on the number of QoS messages.
Support for MQTT v3.0.0 and v3.1.1 is treated as hybrid compatibility. Where the v3 specification is not explicitly restrictive, the newer, safety-first v5 behaviour is used instead - for example expiry of retained messages, expiry of pending (inflight) messages, client expiry, and limits on the number of QoS messages.

#### Release schedule
Unless a critical issue is involved, new releases typically go out over the weekend.
@@ -59,7 +60,6 @@ MQTT stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and
- Please [open an issue](https://github.com/mochi-mqtt/server/issues) to request new features or new hook interfaces!
- Cluster support.
- Metrics support.
- Configuration file support (Docker support).

## Quick Start
### Running the server with Go
@@ -72,12 +72,53 @@ go build -o mqtt && ./mqtt

### Using Docker

A simple Dockerfile is provided for running the Websocket, TCP and stats servers in cmd/main.go:
You can now pull and run the Mochi MQTT [official image](https://hub.docker.com/r/mochimqtt/server) from the Docker Hub repository:
```sh
docker pull mochimqtt/server
or
docker run -v $(pwd)/config.yaml:/config.yaml mochimqtt/server
```

In general, you can configure the server with file-based configuration, simply by specifying a valid yaml or json config file.
A simple Dockerfile is provided for running the three network services in [cmd/main.go](cmd/main.go): Websocket (:1882), TCP (:1883) and server status information (:8080), using an allow-all auth hook.

```sh
docker build -t mochi:latest .
docker run -p 1883:1883 -p 1882:1882 -p 8080:8080 mochi:latest
docker run -p 1883:1883 -p 1882:1882 -p 8080:8080 -v $(pwd)/config.yaml:/config.yaml mochi:latest
```
More substantial Docker support is being discussed [here](https://github.com/orgs/mochi-mqtt/discussions/281#discussion-5544545) and [here](https://github.com/orgs/mochi-mqtt/discussions/209). If you use Mochi-MQTT in this environment, please join the discussion.
### File-based Configuration
You can use file-based configuration together with the Docker image (described above), or by running the compiled binary with `--config=config.yaml` or `--config=config.json` to specify a configuration file.

Configuration files make the server easier to manage and maintain. You can enable and configure the built-in hooks and listeners, and specify server options and compatibilities:

```yaml
listeners:
  - type: "tcp"
    id: "tcp12"
    address: ":1883"
  - type: "ws"
    id: "ws1"
    address: ":1882"
  - type: "sysinfo"
    id: "stats"
    address: ":1880"
hooks:
  auth:
    allow_all: true
options:
  inline_client: true
```

Please refer to the examples in [examples/config](examples/config) for all available configuration options.
There are a few things to note:

1. If you use file-based configuration, the hook types currently supported for configuration are limited to auth, storage and debug, and there can be only one hook of each type.
2. You can only use built-in hooks (hooks that already exist in mochi-mqtt by default, not ones you create yourself) with file-based configuration, because the hook's configuration must match the structure expected by the configuration file.
3. You can only use built-in listeners, for the same reason.

If you need to implement custom hooks or listeners, please do so in the traditional way shown in [cmd/main.go](cmd/main.go).


## Developing with Mochi MQTT
### Importing Mochi MQTT as a package
@@ -108,7 +149,7 @@ func main() {
_ = server.AddHook(new(auth.AllowHook), nil)

// Create a TCP listener on the standard port 1883.
tcp := listeners.NewTCP("t1", ":1883", nil)
tcp := listeners.NewTCP(listeners.Config{ID: "t1", Address: ":1883"})
err := server.AddListener(tcp)
if err != nil {
log.Fatal(err)
@@ -156,6 +197,7 @@ func main() {
server := mqtt.New(&mqtt.Options{
  Capabilities: mqtt.Capabilities{
    MaximumSessionExpiryInterval: 3600,
    MaximumClientWritesPending: 3,
    Compatibilities: mqtt.Compatibilities{
      ObscureNotAuthorized: true,
    },
@@ -166,8 +208,13 @@ server := mqtt.New(&mqtt.Options{
  InlineClient: false,
})
```
Please refer to the mqtt.Options, mqtt.Capabilities and mqtt.Compatibilities structs for the complete list of server options. ClientNetWriteBufferSize and ClientNetReadBufferSize can be configured to adjust per-client memory usage according to your needs.
Please refer to the mqtt.Options, mqtt.Capabilities and mqtt.Compatibilities structs for the complete list of server options. ClientNetWriteBufferSize and ClientNetReadBufferSize can be configured to adjust per-client memory usage according to your needs. The size of Capabilities.MaximumClientWritesPending affects the server's memory usage: if a large number of IoT devices are online at the same time and this value is set very high, server memory usage will grow considerably even when no data is being sent or received. The default value is 1024*8, and the parameter can be adjusted to suit your situation.
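For orientation, here is a minimal, illustrative sketch of tuning these values from embedding code (the numbers are arbitrary examples rather than recommendations; defaults are applied by `mqtt.New` and only need overriding where required):

```go
server := mqtt.New(&mqtt.Options{
	ClientNetWriteBufferSize: 2048, // per-client write buffer (bytes)
	ClientNetReadBufferSize:  2048, // per-client read buffer (bytes)
})

// Lower the pending-writes cap from the default of 1024*8 to reduce the
// worst-case memory footprint when many mostly-idle clients are connected.
server.Options.Capabilities.MaximumClientWritesPending = 1024
```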
### Default Configuration Notes

Some notes on how the default configuration values were chosen:

- By default, server.Options.Capabilities.MaximumMessageExpiryInterval is set to 86400 (24 hours) to prevent the server being exposed to malicious DoS attacks on hostile networks when using the default configuration (no expiry would allow an unlimited number of retained/inflight messages to accumulate). If you are running in a trusted environment, or have capacity for a larger retention period, you may override this (set it to 0 to remove the expiry limit).

## Event Hooks

@@ -181,10 +228,11 @@ server := mqtt.New(&mqtt.Options{
| Access Control | [mochi-mqtt/server/hooks/auth . Auth](hooks/auth/auth.go) | Rule-based access control ledger. |
| Persistence | [mochi-mqtt/server/hooks/storage/bolt](hooks/storage/bolt/bolt.go) | Persistent storage using [BoltDB](https://dbdb.io/db/boltdb) (deprecated). |
| Persistence | [mochi-mqtt/server/hooks/storage/badger](hooks/storage/badger/badger.go) | Persistent storage using [BadgerDB](https://github.com/dgraph-io/badger). |
| Persistence | [mochi-mqtt/server/hooks/storage/pebble](hooks/storage/pebble/pebble.go) | Persistent storage using [PebbleDB](https://github.com/cockroachdb/pebble). |
| Persistence | [mochi-mqtt/server/hooks/storage/redis](hooks/storage/redis/redis.go) | Persistent storage using [Redis](https://redis.io). |
| Debugging | [mochi-mqtt/server/hooks/debug](hooks/debug/debug.go) | Debugging output to trace packet flow through the server. |

Many internal functions are exposed to developers, so you can build your own hooks using the examples above. If you have better suggestions or questions about hooks, you can [open an issue](https://github.com/mochi-mqtt/server/issues) and let us know.

### Access Control

@@ -263,7 +311,7 @@ err := server.AddHook(new(auth.Hook), &auth.Options{
```
See [examples/auth/encoded/main.go](examples/auth/encoded/main.go) for details.

### Persistent Storage
#### Redis

@@ -283,9 +331,25 @@ if err != nil {
```
For more information on how the Redis hook works, or how to use it, see [examples/persistence/redis/main.go](examples/persistence/redis/main.go) or [hooks/storage/redis](hooks/storage/redis).

#### Pebble DB

If you prefer file-based storage, there is also a PebbleDB storage hook available. It can be added and configured in much the same way as the other hooks (with somewhat fewer options).

```go
err := server.AddHook(new(pebble.Hook), &pebble.Options{
  Path: pebblePath,
  Mode: pebble.NoSync,
})
if err != nil {
  log.Fatal(err)
}
```

For more information on how the Pebble hook works, or how to use it, see [examples/persistence/pebble/main.go](examples/persistence/pebble/main.go) or [hooks/storage/pebble](hooks/storage/pebble).

#### Badger DB

If you prefer file-based storage, there is also a BadgerDB storage hook available. It can be added and configured in much the same way as the other hooks (with somewhat fewer options).
Likewise file-based, there is also a BadgerDB storage hook available. It can be added and configured in much the same way as the other hooks.

```go
err := server.AddHook(new(badger.Hook), &badger.Options{
@@ -343,7 +407,7 @@ if err != nil {
| OnRetainedExpired | Called when a retained message has expired and should be deleted. |
| StoredClients | Returns the list of clients, e.g. obtained from a persistent database. |
| StoredSubscriptions | Returns all subscriptions for clients, e.g. obtained from a persistent database. |
| StoredInflightMessages | Returns in-transit (inflight) messages, e.g. which messages have not yet completed transmission, obtained from a persistent database. |
| StoredInflightMessages | Returns pending (inflight) messages, e.g. which messages have not yet completed transmission, obtained from a persistent database. |
| StoredRetainedMessages | Returns retained messages, e.g. obtained from a persistent database. |
| StoredSysInfo | Returns stored system status information, e.g. obtained from a persistent database. |

@@ -351,7 +415,7 @@ if err != nil {

### Inline Client (supported in v2.4.0+)

It is now possible to subscribe to topics and publish messages directly on the server by using the inline client feature. The inline client is a special client built into the server, which can be enabled in the server options:
It is now possible to subscribe to topics and publish messages directly on the server by using the inline client feature. Currently, the inline client does not yet support shared subscriptions. The inline client is a special client built into the server, which can be enabled in the server options:

```go
server := mqtt.New(&mqtt.Options{
README-JP.md (new file, 496 lines)
@@ -0,0 +1,496 @@
|
||||
# Mochi-MQTT Server
|
||||
|
||||
<p align="center">
|
||||
|
||||

|
||||
[](https://coveralls.io/github/mochi-mqtt/server?branch=master)
|
||||
[](https://goreportcard.com/report/github.com/mochi-mqtt/server/v2)
|
||||
[](https://pkg.go.dev/github.com/mochi-mqtt/server/v2)
|
||||
[](https://github.com/mochi-mqtt/server/issues)
|
||||
|
||||
</p>
|
||||
|
||||
[English](README.md) | [简体中文](README-CN.md) | [日本語](README-JP.md) | [Translators Wanted!](https://github.com/orgs/mochi-mqtt/discussions/310)
|
||||
|
||||
🎆 **mochi-co/mqtt は新しい mochi-mqtt organisation の一部です.** [このページをお読みください](https://github.com/orgs/mochi-mqtt/discussions/271)
|
||||
|
||||
|
||||
### Mochi-MQTTは MQTT v5 (と v3.1.1)に完全に準拠しているアプリケーションに組み込み可能なハイパフォーマンスなbroker/serverです.
|
||||
|
||||
Mochi MQTT は Goで書かれたMQTT v5に完全に[準拠](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html)しているMQTTブローカーで、IoTプロジェクトやテレメトリの開発プロジェクト向けに設計されています。 スタンドアロンのバイナリで使ったり、アプリケーションにライブラリとして組み込むことができ、プロジェクトのメンテナンス性と品質を確保できるように配慮しながら、 軽量で可能な限り速く動作するように設計されています。
|
||||
|
||||
#### MQTTとは?
|
||||
MQTT は [MQ Telemetry Transport](https://en.wikipedia.org/wiki/MQTT)を意味します。 Pub/Sub型のシンプルで軽量なメッセージプロトコルで、低帯域、高遅延、不安定なネットワーク下での制約を考慮して設計されています([MQTTについて詳しくはこちら](https://mqtt.org/faq))。 Mochi MQTTはMQTTプロトコルv5.0.0に完全準拠した実装をしています。
|
||||
|
||||
#### Mochi-MQTTのもつ機能
|
||||
|
||||
- MQTTv5への完全な準拠とMQTT v3.1.1 および v3.0.0 との互換性:
|
||||
- MQTT v5で拡張されたユーザープロパティ
|
||||
- トピック・エイリアス
|
||||
- 共有サブスクリプション
|
||||
- サブスクリプションオプションとサブスクリプションID
|
||||
- メッセージの有効期限
|
||||
- クライアントセッション
|
||||
- 送受信QoSフロー制御クォータ
|
||||
- サーバサイド切断と認証パケット
|
||||
- Will遅延間隔
|
||||
- 上記に加えてQoS(0,1,2)、$SYSトピック、retain機能などすべてのMQTT v1の特徴を持ちます
|
||||
- Developer-centric:
|
||||
- 開発者が制御できるように、ほとんどのコアブローカーのコードをエクスポートにしてアクセスできるようにしました。
|
||||
- フル機能で柔軟なフックベースのインターフェイスにすることで簡単に'プラグイン'を開発できるようにしました。
|
||||
- 特別なインラインクライアントを利用することでパケットインジェクションを行うか、既存のクライアントとしてマスカレードすることができます。
|
||||
- パフォーマンスと安定性:
|
||||
- 古典的なツリーベースのトピックサブスクリプションモデル
|
||||
- クライアント固有に書き込みバッファーをもたせることにより、読み込みの遅さや不規則なクライアントの挙動の問題を回避しています。
|
||||
- MQTT v5 and MQTT v3のすべての[Paho互換性テスト](https://github.com/eclipse/paho.mqtt.testing/tree/master/interoperability)をpassしています。
|
||||
- 慎重に検討された多くのユニットテストシナリオでテストされています。
|
||||
- TCP, Websocket (SSL/TLSを含む), $SYSのダッシュボードリスナー
|
||||
- フックを利用した保存機能としてRedis, Badger, Boltを使うことができます(自作のHookも可能です)。
|
||||
- フックを利用したルールベース認証機能とアクセス制御リストLedgerを使うことができます(自作のHookも可能です)。
|
||||
|
||||
### 互換性に関する注意事項
|
||||
MQTTv5とそれ以前との互換性から、サーバーはv5とv3両方のクライアントを受け入れることができますが、v5とv3のクライアントが接続された場合はv5でクライアント向けの特徴と機能はv3クライアントにダウングレードされます(ユーザープロパティなど)。
|
||||
MQTT v3.0.0 と v3.1.1 のサポートはハイブリッド互換性があるとみなされます。それはv3と仕様に制限されていない場合、例えば、送信メッセージ、保持メッセージの有効期限とQoSフロー制御制限などについては、よりモダンで安全なv5の動作が使用されます
|
||||
|
||||
#### リリースされる時期について
|
||||
クリティカルなイシュー出ない限り、新しいリリースがされるのは週末です。
|
||||
|
||||
## Roadmap
|
||||
- 新しい特徴やイベントフックのリクエストは [open an issue](https://github.com/mochi-mqtt/server/issues) へ!
|
||||
- クラスターのサポート
|
||||
- メトリックスサポートの強化
|
||||
- ファイルベースの設定(Dockerイメージのサポート)
|
||||
|
||||
## Quick Start
|
||||
### GoでのBrokerの動かし方
|
||||
Mochi MQTTはスタンドアロンのブローカーとして使うことができます。単純にこのレポジトリーをチェックアウトして、[cmd/main.go](cmd/main.go) を起動すると内部の [cmd](cmd) フォルダのエントリポイントにしてtcp (:1883), websocket (:1882), dashboard (:8080)のポートを外部にEXPOSEします。
|
||||
|
||||
```
|
||||
cd cmd
|
||||
go build -o mqtt && ./mqtt
|
||||
```
|
||||
|
||||
### Dockerで利用する
|
||||
Dockerレポジトリの [official Mochi MQTT image](https://hub.docker.com/r/mochimqtt/server) から Pullして起動することができます。
|
||||
|
||||
```sh
|
||||
docker pull mochimqtt/server
|
||||
or
|
||||
docker run mochimqtt/server
|
||||
```
|
||||
|
||||
これは実装途中です。[file-based configuration](https://github.com/orgs/mochi-mqtt/projects/2) は、この実装をよりよくサポートするために開発中です。
|
||||
より実質的なdockerのサポートが議論されています。_Docker環境で使っている方は是非この議論に参加してください。_ [ここ](https://github.com/orgs/mochi-mqtt/discussions/281#discussion-5544545) や [ここ](https://github.com/orgs/mochi-mqtt/discussions/209)。
|
||||
|
||||
[cmd/main.go](cmd/main.go)の Websocket, TCP, Statsサーバを実行するために、シンプルなDockerfileが提供されます。
|
||||
|
||||
|
||||
```sh
|
||||
docker build -t mochi:latest .
|
||||
docker run -p 1883:1883 -p 1882:1882 -p 8080:8080 mochi:latest
|
||||
```
|
||||
|
||||
## Mochi MQTTを使って開発するには
|
||||
### パッケージをインポート
|
||||
Mochi MQTTをパッケージとしてインポートするにはほんの数行のコードで始めることができます。
|
||||
``` go
|
||||
import (
|
||||
"log"
|
||||
|
||||
mqtt "github.com/mochi-mqtt/server/v2"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/auth"
|
||||
"github.com/mochi-mqtt/server/v2/listeners"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// Create signals channel to run server until interrupted
|
||||
sigs := make(chan os.Signal, 1)
|
||||
done := make(chan bool, 1)
|
||||
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
|
||||
go func() {
|
||||
<-sigs
|
||||
done <- true
|
||||
}()
|
||||
|
||||
// Create the new MQTT Server.
|
||||
server := mqtt.New(nil)
|
||||
|
||||
// Allow all connections.
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
|
||||
// Create a TCP listener on a standard port.
|
||||
tcp := listeners.NewTCP(listeners.Config{ID: "t1", Address: ":1883"})
|
||||
err := server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
|
||||
go func() {
|
||||
err := server.Serve()
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}()
|
||||
|
||||
// Run server until interrupted
|
||||
<-done
|
||||
|
||||
// Cleanup
|
||||
}
|
||||
```
|
||||
|
||||
ブローカーの動作例は [examples](examples)フォルダにあります。
|
||||
|
||||
#### Network Listeners
|
||||
サーバは様々なプロトコルのコネクションのリスナーに対応しています。現在の対応リスナーは、
|
||||
|
||||
| Listener | Usage |
|
||||
|------------------------------|----------------------------------------------------------------------------------------------|
|
||||
| listeners.NewTCP | TCPリスナー |
|
||||
| listeners.NewUnixSock | Unixソケットリスナー |
|
||||
| listeners.NewNet | net.Listenerリスナー |
|
||||
| listeners.NewWebsocket | Websocketリスナー |
|
||||
| listeners.NewHTTPStats | HTTP $SYSダッシュボード |
|
||||
| listeners.NewHTTPHealthCheck | ヘルスチェック応答を提供するためのHTTPヘルスチェックリスナー(クラウドインフラ) |
|
||||
|
||||
> 新しいリスナーを開発するためには `listeners.Listener` を使ってください。使ったら是非教えてください!
|
||||
|
||||
TLSを設定するには`*listeners.Config`を渡すことができます。
|
||||
|
||||
[examples](examples) フォルダと [cmd/main.go](cmd/main.go)に使用例があります。
|
||||
|
||||
|
||||
## 設定できるオプションと機能
|
||||
たくさんのオプションが利用可能です。サーバーの動作を変更したり、特定の機能へのアクセスを制限することができます。
|
||||
|
||||
```go
|
||||
server := mqtt.New(&mqtt.Options{
|
||||
Capabilities: mqtt.Capabilities{
|
||||
MaximumSessionExpiryInterval: 3600,
|
||||
MaximumClientWritesPending: 3,
|
||||
Compatibilities: mqtt.Compatibilities{
|
||||
ObscureNotAuthorized: true,
|
||||
},
|
||||
},
|
||||
ClientNetWriteBufferSize: 4096,
|
||||
ClientNetReadBufferSize: 4096,
|
||||
SysTopicResendInterval: 10,
|
||||
InlineClient: false,
|
||||
})
|
||||
```
|
||||
|
||||
mqtt.Options、mqtt.Capabilities、mqtt.Compatibilitiesの構造体はオプションの理解に役立ちます。
|
||||
必要に応じて`ClientNetWriteBufferSize`と`ClientNetReadBufferSize`はクライアントの使用するメモリに合わせて設定できます。
|
||||
`Capabilities.MaximumClientWritesPending`のサイズは、サーバーのメモリ使用量に影響を与えます。IoTデバイスが同時にオンラインで多数存在する場合、また設定値が非常に大きい場合、データの送受信がなくても、サーバーのメモリ使用量は大幅に増加します。デフォルト値は1024*8で、実際の状況に応じてこのパラメータを調整することができます。
|
||||
|
||||
### デフォルト設定に関する注意事項
|
||||
|
||||
いくつかのデフォルトの設定を決める際にいくつかの決定がなされましたのでここに記しておきます:
|
||||
- デフォルトとして、敵対的なネットワーク上のDoSアタックにさらされるのを防ぐために `server.Options.Capabilities.MaximumMessageExpiryInterval`は86400 (24時間)に、とセットされています。有効期限を無限にすると、保持、送信メッセージが無限に蓄積されるからです。もし信頼できる環境であったり、より大きな保存期間が可能であれば、この設定はオーバーライドできます(`0` を設定すると有効期限はなくなります。)
|
||||
|
||||
## Event Hooks
|
||||
ユニバーサルイベントフックシステムは、開発者にサーバとクライアントの様々なライフサイクルをフックすることができ、ブローカーの機能を追加/変更することができます。それらのユニバーサルフックは認証、永続ストレージ、デバッグツールなど、あらゆるものに使用されています。
|
||||
フックは複数重ねることができ、サーバに複数のフックを設定することができます。それらは追加した順番に動作します。いくつかのフックは値を変えて、その値は動作コードに返される前にあとに続くフックに渡されます。
|
||||
|
||||
|
||||
| Type | Import | Info |
|
||||
|----------------|--------------------------------------------------------------------------|----------------------------------------------------------------------------|
|
||||
| Access Control | [mochi-mqtt/server/hooks/auth . AllowHook](hooks/auth/allow_all.go) | すべてのトピックに対しての読み書きをすべてのクライアントに対して許可します。 |
|
||||
| Access Control | [mochi-mqtt/server/hooks/auth . Auth](hooks/auth/auth.go) | ルールベースのアクセスコントロール台帳です。 |
|
||||
| Persistence | [mochi-mqtt/server/hooks/storage/bolt](hooks/storage/bolt/bolt.go) | [BoltDB](https://dbdb.io/db/boltdb) を使った永続ストレージ (非推奨). |
|
||||
| Persistence | [mochi-mqtt/server/hooks/storage/badger](hooks/storage/badger/badger.go) | [BadgerDB](https://github.com/dgraph-io/badger)を使った永続ストレージ |
|
||||
| Persistence | [mochi-mqtt/server/hooks/storage/redis](hooks/storage/redis/redis.go) | [Redis](https://redis.io)を使った永続ストレージ |
|
||||
| Debugging | [mochi-mqtt/server/hooks/debug](hooks/debug/debug.go) | パケットフローを可視化するデバッグ用のフック |
|
||||
|
||||
たくさんの内部関数が開発者に公開されています、なので、上記の例を使って自分でフックを作ることができます。もし作ったら是非[Open an issue](https://github.com/mochi-mqtt/server/issues)に投稿して教えてください!
|
||||
|
||||
### アクセスコントロール
|
||||
#### Allow Hook
|
||||
デフォルトで、Mochi MQTTはアクセスコントロールルールにDENY-ALLを使用しています。コネクションを許可するためには、アクセスコントロールフックを上書きする必要があります。一番単純なのは`auth.AllowAll`フックで、ALLOW-ALLルールがすべてのコネクション、サブスクリプション、パブリッシュに適用されます。使い方は下記のようにするだけです:
|
||||
|
||||
```go
|
||||
server := mqtt.New(nil)
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
```
|
||||
|
||||
> もしインターネットや信頼できないネットワークにさらされる場合は行わないでください。これは開発・テスト・デバッグ用途のみであるべきです。
|
||||
|
||||
#### Auth Ledger
|
||||
Auth Ledgerは構造体で定義したアクセスルールの洗練された仕組みを提供します。Auth Ledgerルール2つの形式から成ります、認証ルール(コネクション)とACLルール(パブリッシュ、サブスクライブ)です。
|
||||
|
||||
認証ルールは4つのクライテリアとアサーションフラグがあります:
|
||||
| Criteria | Usage |
|
||||
| -- | -- |
|
||||
| Client | 接続クライアントのID |
|
||||
| Username | 接続クライアントのユーザー名 |
|
||||
| Password | 接続クライアントのパスワード |
|
||||
| Remote | クライアントのリモートアドレスもしくはIP |
|
||||
| Allow | true(このユーザーを許可する)もしくはfalse(このユーザを拒否する) |
|
||||
|
||||
アクセスコントロールルールは3つのクライテリアとフィルターマッチがあります:
|
||||
| Criteria | Usage |
|
||||
| -- | -- |
|
||||
| Client | 接続クライアントのID |
|
||||
| Username | 接続クライアントのユーザー名 |
|
||||
| Remote | クライアントのリモートアドレスもしくはIP |
|
||||
| Filters | 合致するフィルターの配列 |
|
||||
|
||||
ルールはインデックス順(0,1,2,3)に処理され、はじめに合致したルールが適用されます。 [hooks/auth/ledger.go](hooks/auth/ledger.go) の構造体を見てください。
|
||||
|
||||
|
||||
```go
|
||||
server := mqtt.New(nil)
|
||||
err := server.AddHook(new(auth.Hook), &auth.Options{
|
||||
Ledger: &auth.Ledger{
|
||||
Auth: auth.AuthRules{ // Auth disallows all by default
|
||||
{Username: "peach", Password: "password1", Allow: true},
|
||||
{Username: "melon", Password: "password2", Allow: true},
|
||||
{Remote: "127.0.0.1:*", Allow: true},
|
||||
{Remote: "localhost:*", Allow: true},
|
||||
},
|
||||
ACL: auth.ACLRules{ // ACL allows all by default
|
||||
{Remote: "127.0.0.1:*"}, // local superuser allow all
|
||||
{
|
||||
// user melon can read and write to their own topic
|
||||
Username: "melon", Filters: auth.Filters{
|
||||
"melon/#": auth.ReadWrite,
|
||||
"updates/#": auth.WriteOnly, // can write to updates, but can't read updates from others
|
||||
},
|
||||
},
|
||||
{
|
||||
// Otherwise, no clients have publishing permissions
|
||||
Filters: auth.Filters{
|
||||
"#": auth.ReadOnly,
|
||||
"updates/#": auth.Deny,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
})
|
||||
```
|
||||
|
||||
ledgeはデータフィールドを使用してJSONもしくはYAML形式で保存したものを使用することもできます。
|
||||
```go
|
||||
err := server.AddHook(new(auth.Hook), &auth.Options{
|
||||
Data: data, // build ledger from byte slice: yaml or json
|
||||
})
|
||||
```
|
||||
より詳しくは[examples/auth/encoded/main.go](examples/auth/encoded/main.go)を見てください。
|
||||
|
||||
### 永続ストレージ
|
||||
#### Redis
|
||||
ブローカーに永続性を提供する基本的な Redis ストレージフックが利用可能です。他のフックと同じ方法で、いくつかのオプションを使用してサーバーに追加できます。それはフック内部で github.com/go-redis/redis/v8 を使用し、Optionsの値で詳しい設定を行うことができます。
|
||||
```go
|
||||
err := server.AddHook(new(redis.Hook), &redis.Options{
|
||||
Options: &rv8.Options{
|
||||
Addr: "localhost:6379", // default redis address
|
||||
Password: "", // your password
|
||||
DB: 0, // your redis db
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
```
|
||||
Redisフックがどのように動くか、どのように使用するかについての詳しくは、[examples/persistence/redis/main.go](examples/persistence/redis/main.go) か [hooks/storage/redis](hooks/storage/redis) のソースコードを見てください。
|
||||
|
||||
#### Badger DB
|
||||
もしファイルベースのストレージのほうが適しているのであれば、BadgerDBストレージも使用することができます。それもまた、他のフックと同様に追加、設定することができます(オプションは若干少ないです)。
|
||||
|
||||
```go
|
||||
err := server.AddHook(new(badger.Hook), &badger.Options{
|
||||
Path: badgerPath,
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
```
|
||||
|
||||
badgerフックがどのように動くか、どのように使用するかについての詳しくは、[examples/persistence/badger/main.go](examples/persistence/badger/main.go) か [hooks/storage/badger](hooks/storage/badger) のソースコードを見てください。
|
||||
|
||||
BoltDBフックはBadgerに代わって非推奨となりましたが、もし必要ならば [examples/persistence/bolt/main.go](examples/persistence/bolt/main.go)をチェックしてください。
|
||||
|
||||
## イベントフックを利用した開発
|
||||
|
||||
ブローカーとクライアントのライフサイクルに関わるたくさんのフックが利用できます。
|
||||
そのすべてのフックと`mqtt.Hook`インターフェイスの関数シグネチャは[hooks.go](hooks.go)に記載されています。
|
||||
|
||||
> もっと柔軟なイベントフックはOnPacketRead、OnPacketEncodeとOnPacketSentです。それらは、すべての流入パケットと流出パケットをコントロール及び変更に使用されるフックです。
|
||||
|
||||
|
||||
| Function | Usage |
|
||||
|------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| OnStarted | サーバーが正常にスタートした際に呼ばれます。 |
|
||||
| OnStopped | サーバーが正常に終了した際に呼ばれます。 |
|
||||
| OnConnectAuthenticate | ユーザーがサーバと認証を試みた際に呼ばれます。このメソッドはサーバーへのアクセス許可もしくは拒否するためには必ず使用する必要があります(hooks/auth/allow_all or basicを見てください)。これは、データベースにユーザーが存在するか照合してチェックするカスタムフックに利用できます。許可する場合はtrueを返す実装をします。|
|
||||
| OnACLCheck | ユーザーがあるトピックフィルタにpublishかsubscribeした際に呼ばれます。上と同様です |
|
||||
| OnSysInfoTick | $SYSトピック値がpublishされた場合に呼ばれます。 |
|
||||
| OnConnect | 新しいクライアントが接続した際によばれます、エラーかパケットコードを返して切断する場合があります。 |
|
||||
| OnSessionEstablish | 新しいクライアントが接続された後すぐ、セッションが確立されてCONNACKが送信される前に呼ばれます。 |
|
||||
| OnSessionEstablished | 新しいクライアントがセッションを確立した際(OnConnectの後)に呼ばれます。 |
|
||||
| OnDisconnect | クライアントが何らかの理由で切断された場合に呼ばれます。 |
|
||||
| OnAuthPacket | 認証パケットを受け取ったときに呼ばれます。これは開発者にmqtt v5の認証パケットを取り扱う仕組みを作成すること意図しています。パケットを変更することができます。 |
|
||||
| OnPacketRead | クライアントからパケットを受け取った際に呼ばれます。パケットを変更することができます。 |
|
||||
| OnPacketEncode | エンコードされたパケットがクライアントに送信する直前に呼ばれます。パケットを変更することができます。 |
|
||||
| OnPacketSent | クライアントにパケットが送信された際に呼ばれます。 |
|
||||
| OnPacketProcessed | パケットが届いてブローカーが正しく処理できた場合に呼ばれます。 |
|
||||
| OnSubscribe | クライアントが1つ以上のフィルタをsubscribeした場合に呼ばれます。パケットの変更ができます。 |
|
||||
| OnSubscribed | クライアントが1つ以上のフィルタをsubscribeに成功した場合に呼ばれます。 |
|
||||
| OnSelectSubscribers | サブスクライバーがトピックに収集されたとき、共有サブスクライバーが選択される前に呼ばれる。受信者は変更可能。 |
|
||||
| OnUnsubscribe | 1つ以上のあんサブスクライブが呼ばれた場合。パケットの変更は可能。 |
|
||||
| OnUnsubscribed | クライアントが正常に1つ以上のトピックフィルタをサブスクライブ解除した場合。 |
|
||||
| OnPublish | クライアントがメッセージをパブリッシュした場合。パケットの変更は可能。 |
|
||||
| OnPublished | クライアントがサブスクライバーにメッセージをパブリッシュし終わった場合。 |
|
||||
| OnPublishDropped | あるクライアントが反応に時間がかかった場合等のようにクライアントに到達する前にメッセージが失われた場合に呼ばれる。 |
|
||||
| OnRetainMessage | パブリッシュされたメッセージが保持された場合に呼ばれる。 |
|
||||
| OnRetainPublished | 保持されたメッセージがクライアントに到達した場合に呼ばれる。 |
|
||||
| OnQosPublish | QoSが1以上のパケットがサブスクライバーに発行された場合。 |
|
||||
| OnQosComplete | そのメッセージQoSフローが完了した場合に呼ばれる。 |
|
||||
| OnQosDropped | インフライトメッセージが完了前に期限切れになった場合に呼ばれる。 |
|
||||
| OnPacketIDExhausted | クライアントがパケットに割り当てるIDが枯渇した場合に呼ばれる。 |
|
||||
| OnWill | クライアントが切断し、WILLメッセージを発行しようとした場合に呼ばれる。パケットの変更が可能。 |
|
||||
| OnWillSent | LWTメッセージが切断されたクライアントから発行された場合に呼ばれる |
|
||||
| OnClientExpired | クライアントセッションが期限切れで削除するべき場合に呼ばれる。 |
|
||||
| OnRetainedExpired | 保持メッセージが期限切れで削除すべき場合に呼ばれる。 |
|
||||
| StoredClients | クライアントを返す。例えば永続ストレージから。 |
|
||||
| StoredSubscriptions | クライアントのサブスクリプションを返す。例えば永続ストレージから。 |
|
||||
| StoredInflightMessages | インフライトメッセージを返す。例えば永続ストレージから。 |
|
||||
| StoredRetainedMessages | 保持されたメッセージを返す。例えば永続ストレージから。 |
|
||||
| StoredSysInfo | システム情報の値を返す。例えば永続ストレージから。 |
|
||||
|
||||
もし永続ストレージフックを作成しようとしているのであれば、すでに存在する永続的なフックを見てインスピレーションとどのようなパターンがあるか見てみてください。もし認証フックを作成しようとしているのであれば、`OnACLCheck`と`OnConnectAuthenticate`が役立つでしょう。
|
||||
|
||||
### Inline Client (v2.4.0+)
|
||||
トピックに対して埋め込まれたコードから直接サブスクライブとパブリッシュできます。そうするには`inline client`機能を使うことができます。インラインクライアント機能はサーバの一部として組み込まれているクライアントでサーバーのオプションとしてEnableにできます。
|
||||
```go
|
||||
server := mqtt.New(&mqtt.Options{
|
||||
InlineClient: true,
|
||||
})
|
||||
```
|
||||
Enableにすると、`server.Publish`, `server.Subscribe`, `server.Unsubscribe`のメソッドを利用できて、ブローカーから直接メッセージを送受信できます。
|
||||
> 実際の使用例は[direct examples](examples/direct/main.go)を見てください。
|
||||
|
||||
#### Inline Publish
|
||||
組み込まれたアプリケーションからメッセージをパブリッシュするには`server.Publish(topic string, payload []byte, retain bool, qos byte) error`メソッドを利用します。
|
||||
|
||||
```go
|
||||
err := server.Publish("direct/publish", []byte("packet scheduled message"), false, 0)
|
||||
```
|
||||
> このケースでのQoSはサブスクライバーに設定できる上限でしか使用されません。これはMQTTv5の仕様に従っています。
|
||||
|
||||
#### Inline Subscribe
|
||||
組み込まれたアプリケーション内部からトピックフィルタをサブスクライブするには、`server.Subscribe(filter string, subscriptionId int, handler InlineSubFn) error`メソッドがコールバックも含めて使用できます。
|
||||
インラインサブスクリプションではQoS0のみが適用されます。もし複数のコールバックを同じフィルタに設定したい場合は、MQTTv5の`subscriptionId`のプロパティがその区別に使用できます。
|
||||
|
||||
```go
|
||||
callbackFn := func(cl *mqtt.Client, sub packets.Subscription, pk packets.Packet) {
|
||||
server.Log.Info("inline client received message from subscription", "client", cl.ID, "subscriptionId", sub.Identifier, "topic", pk.TopicName, "payload", string(pk.Payload))
|
||||
}
|
||||
server.Subscribe("direct/#", 1, callbackFn)
|
||||
```
|
||||
|
||||
#### Inline Unsubscribe
|
||||
インラインクライアントでサブスクリプション解除をしたい場合は、`server.Unsubscribe(filter string, subscriptionId int) error` メソッドで行うことができます。
|
||||
|
||||
```go
|
||||
server.Unsubscribe("direct/#", 1)
|
||||
```
|
||||
|
||||
### Packet Injection
|
||||
もし、より制御したい場合や、特定のMQTTv5のプロパティやその他の値をセットしたい場合は、クライアントからのパブリッシュパケットを自ら作成することができます。この方法は単なるパブリッシュではなく、MQTTパケットをまるで特定のクライアントから受け取ったかのようにランタイムに直接インジェクションすることができます。
|
||||
|
||||
このパケットインジェクションは例えばPING ReqやサブスクリプションなどのどんなMQTTパケットでも使用できます。そしてクライアントの構造体とメソッドはエクスポートされているので、(もし、非常にカスタマイズ性の高い要求がある場合には)まるで接続されたクライアントに代わってパケットをインジェクションすることさえできます。
|
||||
|
||||
たいていの場合は上記のインラインクライアントを使用するのが良いでしょう、それはACLとトピックバリデーションをバイパスできる特権があるからです。これは$SYSトピックにさえパブリッシュできることも意味します。ビルトインのクライアントと同様に振る舞うインラインクライアントを作成できます。
|
||||
|
||||
```go
|
||||
cl := server.NewClient(nil, "local", "inline", true)
|
||||
server.InjectPacket(cl, packets.Packet{
|
||||
FixedHeader: packets.FixedHeader{
|
||||
Type: packets.Publish,
|
||||
},
|
||||
TopicName: "direct/publish",
|
||||
Payload: []byte("scheduled message"),
|
||||
})
|
||||
```
|
||||
|
||||
> MQTTのパケットは正しく構成する必要があり、なので[the test packets catalogue](packets/tpackets.go)と[MQTTv5 Specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html)を参照してください。
|
||||
|
||||
この機能の動作を確認するには[hooks example](examples/hooks/main.go) を見てください。
|
||||
|
||||
|
||||
### Testing
|
||||
#### ユニットテスト
|
||||
それぞれの関数が期待通りの動作をするように考えられてMochi MQTTテストが作成されています。テストを走らせるには:
|
||||
```
|
||||
go run --cover ./...
|
||||
```
|
||||
|
||||
#### Paho相互運用性テスト
|
||||
`examples/paho/main.go`を使用してブローカーを起動し、_interoperability_フォルダの`python3 client_test5.py`のmqttv5とv3のテストを実行することで、[Paho Interoperability Test](https://github.com/eclipse/paho.mqtt.testing/tree/master/interoperability)を確認することができます。
|
||||
|
||||
> pahoスイートには現在は何個かの偽陰性に関わるissueがあるので、`paho/main.go`の例ではいくつかの互換性モードがオンになっていることに注意してください。
|
||||
|
||||
|
||||
|
||||
## ベンチマーク
|
||||
Mochi MQTTのパフォーマンスはMosquitto、EMQX、その他などの有名なブローカーに匹敵します。
|
||||
|
||||
ベンチマークはApple Macbook Air M2上で[MQTT-Stresser](https://github.com/inovex/mqtt-stresser)、セッティングとして`cmd/main.go`のデフォルト設定を使用しています。高スループットと低スループットのバーストを考慮すると、中央値のスコアが最も信頼できます。この値は高いほど良いです。
|
||||
|
||||
> ベンチマークの値は1秒あたりのメッセージ数のスループットのそのものを表しているわけではありません。これは、mqtt-stresserによる固有の計算に依存するものではありますが、すべてのブローカーに渡って一貫性のある値として利用しています。
|
||||
> ベンチマークは一般的なパフォーマンス予測ガイドラインとしてのみ提供されます。比較はそのまま使用したデフォルトの設定値で実行しています。
|
||||
|
||||
`mqtt-stresser -broker tcp://localhost:1883 -num-clients=2 -num-messages=10000`
|
||||
| Broker | publish fastest | median | slowest | receive fastest | median | slowest |
|
||||
| -- | -- | -- | -- | -- | -- | -- |
|
||||
| Mochi v2.2.10 | 124,772 | 125,456 | 124,614 | 314,461 | 313,186 | 311,910 |
|
||||
| [Mosquitto v2.0.15](https://github.com/eclipse/mosquitto) | 155,920 | 155,919 | 155,918 | 185,485 | 185,097 | 184,709 |
|
||||
| [EMQX v5.0.11](https://github.com/emqx/emqx) | 156,945 | 156,257 | 155,568 | 17,918 | 17,783 | 17,649 |
|
||||
| [Rumqtt v0.21.0](https://github.com/bytebeamio/rumqtt) | 112,208 | 108,480 | 104,753 | 135,784 | 126,446 | 117,108 |
|
||||
|
||||
`mqtt-stresser -broker tcp://localhost:1883 -num-clients=10 -num-messages=10000`
|
||||
| Broker | publish fastest | median | slowest | receive fastest | median | slowest |
|
||||
| -- | -- | -- | -- | -- | -- | -- |
|
||||
| Mochi v2.2.10 | 41,825 | 31,663| 23,008 | 144,058 | 65,903 | 37,618 |
|
||||
| Mosquitto v2.0.15 | 42,729 | 38,633 | 29,879 | 23,241 | 19,714 | 18,806 |
|
||||
| EMQX v5.0.11 | 21,553 | 17,418 | 14,356 | 4,257 | 3,980 | 3,756 |
|
||||
| Rumqtt v0.21.0 | 42,213 | 23,153 | 20,814 | 49,465 | 36,626 | 19,283 |
|
||||
|
||||
100万メッセージ試験 (100 万メッセージを一斉にサーバーに送信します):
|
||||
|
||||
`mqtt-stresser -broker tcp://localhost:1883 -num-clients=100 -num-messages=10000`
|
||||
| Broker | publish fastest | median | slowest | receive fastest | median | slowest |
|
||||
| -- | -- | -- | -- | -- | -- | -- |
|
||||
| Mochi v2.2.10 | 13,532 | 4,425 | 2,344 | 52,120 | 7,274 | 2,701 |
|
||||
| Mosquitto v2.0.15 | 3,826 | 3,395 | 3,032 | 1,200 | 1,150 | 1,118 |
|
||||
| EMQX v5.0.11 | 4,086 | 2,432 | 2,274 | 434 | 333 | 311 |
|
||||
| Rumqtt v0.21.0 | 78,972 | 5,047 | 3,804 | 4,286 | 3,249 | 2,027 |
|
||||
|
||||
> EMQXのここでの結果は何が起きているのかわかりませんが、おそらくDockerのそのままの設定が最適ではなかったのでしょう、なので、この結果はソフトウェアのひとつの側面にしか過ぎないと捉えてください。
|
||||
|
||||
|
||||
## Contribution Guidelines
|
||||
コントリビューションとフィードバックは両方とも歓迎していますでバグを報告したり、質問したり、新機能のリクエストをしてください。もしプルリクエストするならば下記のガイドラインに従うようにしてください。
|
||||
- 合理的で可能な限りテストカバレッジを維持してください
|
||||
- なぜPRをしたのかとそのPRの内容について明確にしてください。
|
||||
- 有意義な貢献をした場合はSPDX FileContributorタグをファイルにつけてください。
|
||||
|
||||
[SPDX Annotations](https://spdx.dev)はそのライセンス、著作権表記、コントリビューターについて明確するのために、それぞれのファイルに機械可読な形式で記されています。もし、新しいファイルをレポジトリに追加した場合は、下記のようなSPDXヘッダーを付与していることを確かめてください。
|
||||
|
||||
```go
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2023 mochi-mqtt
|
||||
// SPDX-FileContributor: Your name or alias <optional@email.address>
|
||||
|
||||
package name
|
||||
```
|
||||
|
||||
ファイルにそれぞれのコントリビューターの`SPDX-FileContributor`が追加されていることを確認してください、他のファイルを参考にしてください。あなたのこのプロジェクトへのコントリビュートは価値があり、高く評価されます!
|
||||
|
||||
|
||||
## Stargazers over time 🥰
|
||||
[](https://starchart.cc/mochi-mqtt/server)
|
||||
Mochi MQTTをプロジェクトで使用していますか? [是非私達に教えてください!](https://github.com/mochi-mqtt/server/issues)
|
||||
|
||||
README.md (83 changed lines)

@@ -10,7 +10,7 @@

</p>

[English](README.md) | [简体中文](README-CN.md)
[English](README.md) | [简体中文](README-CN.md) | [日本語](README-JP.md) | [Translators Wanted!](https://github.com/orgs/mochi-mqtt/discussions/310)

🎆 **mochi-co/mqtt is now part of the new mochi-mqtt organisation.** [Read about this announcement here.](https://github.com/orgs/mochi-mqtt/discussions/271)

@@ -45,7 +45,7 @@ MQTT stands for [MQ Telemetry Transport](https://en.wikipedia.org/wiki/MQTT). It
- Passes all [Paho Interoperability Tests](https://github.com/eclipse/paho.mqtt.testing/tree/master/interoperability) for MQTT v5 and MQTT v3.
- Over a thousand carefully considered unit test scenarios.
- TCP, Websocket (including SSL/TLS), and $SYS Dashboard listeners.
- Built-in Redis, Badger, and Bolt Persistence using Hooks (but you can also make your own).
- Built-in Redis, Badger, Pebble and Bolt Persistence using Hooks (but you can also make your own).
- Built-in Rule-based Authentication and ACL Ledger using Hooks (also make your own).

### Compatibility Notes

@@ -60,7 +60,6 @@ Unless it's a critical issue, new releases typically go out over the weekend.
- Please [open an issue](https://github.com/mochi-mqtt/server/issues) to request new features or event hooks!
- Cluster support.
- Enhanced Metrics support.
- File-based server configuration (supporting docker).

## Quick Start
### Running the Broker with Go
@@ -72,13 +71,54 @@ go build -o mqtt && ./mqtt
```

### Using Docker
A simple Dockerfile is provided for running the [cmd/main.go](cmd/main.go) Websocket, TCP, and Stats server:
You can now pull and run the [official Mochi MQTT image](https://hub.docker.com/r/mochimqtt/server) from our Docker repo:

```sh
docker pull mochimqtt/server
or
docker run -v $(pwd)/config.yaml:/config.yaml mochimqtt/server
```

For most use cases, you can use File Based Configuration to configure the server, by specifying a valid `yaml` or `json` config file.

A simple Dockerfile is provided for running the [cmd/main.go](cmd/main.go) Websocket, TCP, and Stats server, using the `allow-all` auth hook.

```sh
docker build -t mochi:latest .
docker run -p 1883:1883 -p 1882:1882 -p 8080:8080 mochi:latest
docker run -p 1883:1883 -p 1882:1882 -p 8080:8080 -v $(pwd)/config.yaml:/config.yaml mochi:latest
```
_More substantial docker support is being discussed [here](https://github.com/orgs/mochi-mqtt/discussions/281#discussion-5544545) and [here](https://github.com/orgs/mochi-mqtt/discussions/209). Please join the discussion if you use Mochi-MQTT in this environment._
### File Based Configuration
You can use File Based Configuration with either the Docker image (described above), or by running the built binary with the `--config=config.yaml` or `--config=config.json` parameter.

Configuration files provide a convenient mechanism for easily preparing a server with the most common configurations. You can enable and configure built-in hooks and listeners, and specify server options and compatibilities:

```yaml
listeners:
  - type: "tcp"
    id: "tcp12"
    address: ":1883"
  - type: "ws"
    id: "ws1"
    address: ":1882"
  - type: "sysinfo"
    id: "stats"
    address: ":1880"
hooks:
  auth:
    allow_all: true
options:
  inline_client: true
```

Please review the examples found in [examples/config](examples/config) for all available configuration options.

There are a few conditions to note:
1. If you use file-based configuration, the supported hook types for configuration are currently limited to auth, storage, and debug. Each type of hook can only have one instance.
2. You can only use built-in hooks with file-based configuration, as the type and configuration structure need to be known by the server in order for it to be applied.
3. You can only use built-in listeners, for the reasons above.

If you need to implement custom hooks or listeners, please do so using the traditional manner indicated in [cmd/main.go](cmd/main.go).

## Developing with Mochi MQTT
### Importing as a package
@@ -109,7 +149,7 @@ func main() {
_ = server.AddHook(new(auth.AllowHook), nil)

// Create a TCP listener on a standard port.
tcp := listeners.NewTCP("t1", ":1883", nil)
tcp := listeners.NewTCP(listeners.Config{ID: "t1", Address: ":1883"})
err := server.AddListener(tcp)
if err != nil {
log.Fatal(err)
@@ -150,13 +190,15 @@ A `*listeners.Config` may be passed to configure TLS.

Examples of usage can be found in the [examples](examples) folder or [cmd/main.go](cmd/main.go).

### Server Options and Capabilities

## Server Options and Capabilities
A number of configurable options are available which can be used to alter the behaviour or restrict access to certain features in the server.

```go
server := mqtt.New(&mqtt.Options{
  Capabilities: mqtt.Capabilities{
    MaximumSessionExpiryInterval: 3600,
    MaximumClientWritesPending: 3,
    Compatibilities: mqtt.Compatibilities{
      ObscureNotAuthorized: true,
    },
@@ -168,8 +210,13 @@ server := mqtt.New(&mqtt.Options{
})
```

Review the mqtt.Options, mqtt.Capabilities, and mqtt.Compatibilities structs for a comprehensive list of options. `ClientNetWriteBufferSize` and `ClientNetReadBufferSize` can be configured to adjust memory usage per client, based on your needs.
Review the mqtt.Options, mqtt.Capabilities, and mqtt.Compatibilities structs for a comprehensive list of options. `ClientNetWriteBufferSize` and `ClientNetReadBufferSize` can be configured to adjust memory usage per client, based on your needs. The size of `Capabilities.MaximumClientWritesPending` affects the memory usage of the server: if a large number of IoT devices are online at the same time and this value is set very high, server memory usage will increase considerably even when no data is being transmitted. The default value is 1024*8, and the parameter can be adjusted to suit the actual situation.

### Default Configuration Notes

Some choices were made when deciding the default configuration that need to be mentioned here:

- By default, the value of `server.Options.Capabilities.MaximumMessageExpiryInterval` is set to 86400 (24 hours), in order to prevent exposing the broker to DOS attacks on hostile networks when using the out-of-the-box configuration (as an infinite expiry would allow an infinite number of retained/inflight messages to accumulate). If you are operating in a trusted environment, or you have capacity for a larger retention period, you may wish to override this (set to `0` for no expiry).
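As a minimal, illustrative sketch (the retention value below is an arbitrary example; the field path is the one named in the note above), the default can be overridden on the server options before the server is started:

```go
server := mqtt.New(nil)

// Default is 86400 (24 hours). Setting 0 disables message expiry entirely,
// which is only advisable on a trusted network.
server.Options.Capabilities.MaximumMessageExpiryInterval = 60 * 60 * 24 * 7 // e.g. retain for up to 7 days
```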
## Event Hooks
A universal event hooks system allows developers to hook into various parts of the server and client life cycle to add and modify functionality of the broker. These universal hooks are used to provide everything from authentication, persistent storage, to debugging tools.

@@ -182,6 +229,7 @@ Hooks are stackable - you can add multiple hooks to a server, and they will be r
| Access Control | [mochi-mqtt/server/hooks/auth . Auth](hooks/auth/auth.go) | Rule-based access control ledger. |
| Persistence | [mochi-mqtt/server/hooks/storage/bolt](hooks/storage/bolt/bolt.go) | Persistent storage using [BoltDB](https://dbdb.io/db/boltdb) (deprecated). |
| Persistence | [mochi-mqtt/server/hooks/storage/badger](hooks/storage/badger/badger.go) | Persistent storage using [BadgerDB](https://github.com/dgraph-io/badger). |
| Persistence | [mochi-mqtt/server/hooks/storage/pebble](hooks/storage/pebble/pebble.go) | Persistent storage using [PebbleDB](https://github.com/cockroachdb/pebble). |
| Persistence | [mochi-mqtt/server/hooks/storage/redis](hooks/storage/redis/redis.go) | Persistent storage using [Redis](https://redis.io). |
| Debugging | [mochi-mqtt/server/hooks/debug](hooks/debug/debug.go) | Additional debugging output to visualise packet flow. |

@@ -276,8 +324,21 @@ if err != nil {
```
For more information on how the redis hook works, or how to use it, see the [examples/persistence/redis/main.go](examples/persistence/redis/main.go) or [hooks/storage/redis](hooks/storage/redis) code.

#### Pebble DB
There's also a PebbleDB storage hook if you prefer file-based storage. It can be added and configured in much the same way as the other hooks (with somewhat fewer options).
```go
err := server.AddHook(new(pebble.Hook), &pebble.Options{
  Path: pebblePath,
  Mode: pebble.NoSync,
})
if err != nil {
  log.Fatal(err)
}
```
For more information on how the pebble hook works, or how to use it, see the [examples/persistence/pebble/main.go](examples/persistence/pebble/main.go) or [hooks/storage/pebble](hooks/storage/pebble) code.

#### Badger DB
There's also a BadgerDB storage hook if you prefer file-based storage. It can be added and configured in much the same way as the other hooks (with somewhat fewer options).
Similarly, for file-based storage, there is also a BadgerDB storage hook available. It can be added and configured in much the same way as the other hooks.
```go
err := server.AddHook(new(badger.Hook), &badger.Options{
  Path: badgerPath,
@@ -339,7 +400,7 @@ The function signatures for all the hooks and `mqtt.Hook` interface can be found
If you are building a persistent storage hook, see the existing persistent hooks for inspiration and patterns. If you are building an auth hook, you will need `OnACLCheck` and `OnConnectAuthenticate`.
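As a concrete illustration of that last point, here is a minimal sketch of a custom auth hook (the hook name and the rules are hypothetical, not taken from this repository; the method signatures follow the `mqtt.Hook` interface and the built-in auth hooks):

```go
import (
	"bytes"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/packets"
)

// ExampleAuthHook is a hypothetical auth hook: it only answers the two
// authentication callbacks and defers everything else to mqtt.HookBase.
type ExampleAuthHook struct {
	mqtt.HookBase
}

func (h *ExampleAuthHook) ID() string { return "example-auth" }

// Provides tells the server which hook methods this hook implements.
func (h *ExampleAuthHook) Provides(b byte) bool {
	return bytes.Contains([]byte{
		mqtt.OnConnectAuthenticate,
		mqtt.OnACLCheck,
	}, []byte{b})
}

// OnConnectAuthenticate returns true to admit the connecting client,
// e.g. after looking the credentials up in a database.
func (h *ExampleAuthHook) OnConnectAuthenticate(cl *mqtt.Client, pk packets.Packet) bool {
	return string(pk.Connect.Username) != "" // hypothetical rule: any non-empty username
}

// OnACLCheck returns true if the client may read (write=false) or
// write (write=true) the given topic filter.
func (h *ExampleAuthHook) OnACLCheck(cl *mqtt.Client, topic string, write bool) bool {
	return !write || topic != "readonly/topic" // hypothetical rule
}
```

It can then be registered like any other hook, e.g. `err := server.AddHook(new(ExampleAuthHook), nil)`.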
### Inline Client (v2.4.0+)
It's now possible to subscribe and publish to topics directly from the embedding code, by using the `inline client` feature. The Inline Client is an embedded client which operates as part of the server, and can be enabled in the server options:
It's now possible to subscribe and publish to topics directly from the embedding code, by using the `inline client` feature. Currently, the inline client does not support shared subscriptions. The Inline Client is an embedded client which operates as part of the server, and can be enabled in the server options:
```go
server := mqtt.New(&mqtt.Options{
  InlineClient: true,
clients.go (104 changed lines)

@@ -8,6 +8,7 @@ import (
	"bufio"
	"bytes"
	"context"
	"errors"
	"fmt"
	"io"
	"net"
@@ -21,8 +22,13 @@ import (
)

const (
	defaultKeepalive             uint16 = 10 // the default connection keepalive value in seconds
	defaultKeepalive             uint16 = 10 // the default connection keepalive value in seconds.
	defaultClientProtocolVersion byte   = 4  // the default mqtt protocol version of connecting clients (if somehow unspecified).
	minimumKeepalive             uint16 = 5  // the minimum recommended keepalive - values under this will display a warning.
)

var (
	ErrMinimumKeepalive = errors.New("client keepalive is below minimum recommended value and may exhibit connection instability")
)

// ReadFn is the function signature for the function used for reading and processing new packets.
@@ -107,11 +113,12 @@ type Client struct {

// ClientConnection contains the connection transport and metadata for the client.
type ClientConnection struct {
	Conn     net.Conn          // the net.Conn used to establish the connection
	bconn    *bufio.ReadWriter // a buffered net.Conn for reading packets
	Remote   string            // the remote address of the client
	Listener string            // listener id of the client
	Inline   bool              // if true, the client is the built-in 'inline' embedded client
	Conn     net.Conn      // the net.Conn used to establish the connection
	bconn    *bufio.Reader // a buffered net.Conn for reading packets
	outbuf   *bytes.Buffer // a buffer for writing packets
	Remote   string        // the remote address of the client
	Listener string        // listener id of the client
	Inline   bool          // if true, the client is the built-in 'inline' embedded client
}

// ClientProperties contains the properties which define the client behaviour.
@@ -174,11 +181,8 @@ func newClient(c net.Conn, o *ops) *Client {

	if c != nil {
		cl.Net = ClientConnection{
			Conn: c,
			bconn: bufio.NewReadWriter(
				bufio.NewReaderSize(c, o.options.ClientNetReadBufferSize),
				bufio.NewWriterSize(c, o.options.ClientNetWriteBufferSize),
			),
			Conn:   c,
			bconn:  bufio.NewReaderSize(c, o.options.ClientNetReadBufferSize),
			Remote: c.RemoteAddr().String(),
		}
	}
@@ -211,6 +215,19 @@ func (cl *Client) ParseConnect(lid string, pk packets.Packet) {
	cl.Properties.Clean = pk.Connect.Clean
	cl.Properties.Props = pk.Properties.Copy(false)

	if cl.Properties.Props.ReceiveMaximum > cl.ops.options.Capabilities.MaximumInflight { // 3.3.4 Non-normative
		cl.Properties.Props.ReceiveMaximum = cl.ops.options.Capabilities.MaximumInflight
	}

	if pk.Connect.Keepalive <= minimumKeepalive {
		cl.ops.log.Warn(
			ErrMinimumKeepalive.Error(),
			"client", cl.ID,
			"keepalive", pk.Connect.Keepalive,
			"recommended", minimumKeepalive,
		)
	}

	cl.State.Keepalive = pk.Connect.Keepalive // [MQTT-3.2.2-22]
	cl.State.Inflight.ResetReceiveQuota(int32(cl.ops.options.Capabilities.ReceiveMaximum)) // server receive max per client
	cl.State.Inflight.ResetSendQuota(int32(cl.Properties.Props.ReceiveMaximum)) // client receive max
@@ -312,10 +329,26 @@ func (cl *Client) ResendInflightMessages(force bool) error {
}

// ClearInflights deletes all inflight messages for the client, e.g. for a disconnected user with a clean session.
func (cl *Client) ClearInflights(now, maximumExpiry int64) []uint16 {
func (cl *Client) ClearInflights() {
	for _, tk := range cl.State.Inflight.GetAll(false) {
		if ok := cl.State.Inflight.Delete(tk.PacketID); ok {
			cl.ops.hooks.OnQosDropped(cl, tk)
			atomic.AddInt64(&cl.ops.info.Inflight, -1)
		}
	}
}

// ClearExpiredInflights deletes any inflight messages which have expired.
func (cl *Client) ClearExpiredInflights(now, maximumExpiry int64) []uint16 {
	deleted := []uint16{}
	for _, tk := range cl.State.Inflight.GetAll(false) {
		if (tk.Expiry > 0 && tk.Expiry < now) || tk.Created+maximumExpiry < now {
		expired := tk.ProtocolVersion == 5 && tk.Expiry > 0 && tk.Expiry < now // [MQTT-3.3.2-5]

		// If the maximum message expiry interval is set (greater than 0), and the message
		// retention period exceeds the maximum expiry, the message will be forcibly removed.
		enforced := maximumExpiry > 0 && now-tk.Created > maximumExpiry

		if expired || enforced {
			if ok := cl.State.Inflight.Delete(tk.PacketID); ok {
				cl.ops.hooks.OnQosDropped(cl, tk)
				atomic.AddInt64(&cl.ops.info.Inflight, -1)
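For embedding code that previously called the combined method, a minimal illustrative sketch of the split API shown above (the timing values are arbitrary examples, and `cl` stands for a `*mqtt.Client` obtained from the server, e.g. inside a hook):

```go
// Remove only in-flight messages that have expired, honouring a maximum
// message expiry interval (86400 seconds in the default configuration).
expiredIDs := cl.ClearExpiredInflights(time.Now().Unix(), 86400)
_ = expiredIDs // packet IDs of the messages that were dropped

// Remove every remaining in-flight message, e.g. when a clean-session
// client disconnects.
cl.ClearInflights()
```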
@@ -384,6 +417,11 @@ func (cl *Client) StopCause() error {
	return cl.State.stopCause.Load().(error)
}

// StopTime returns the time the client disconnected in unix time, else zero.
func (cl *Client) StopTime() int64 {
	return atomic.LoadInt64(&cl.State.disconnected)
}

// Closed returns true if client connection is closed.
func (cl *Client) Closed() bool {
	return cl.State.open == nil || cl.State.open.Err() != nil
@@ -553,11 +591,35 @@ func (cl *Client) WritePacket(pk packets.Packet) error {
		return packets.ErrPacketTooLarge // [MQTT-3.1.2-24] [MQTT-3.1.2-25]
	}

	nb := net.Buffers{buf.Bytes()}
	n, err := func() (int64, error) {
		cl.Lock()
		defer cl.Unlock()
		return nb.WriteTo(cl.Net.Conn)
		if len(cl.State.outbound) == 0 {
			if cl.Net.outbuf == nil {
				return buf.WriteTo(cl.Net.Conn)
			}

			// first write to buffer, then flush buffer
			n, _ := cl.Net.outbuf.Write(buf.Bytes()) // will always be successful
			err = cl.flushOutbuf()
			return int64(n), err
		}

		// there are more writes in the queue
		if cl.Net.outbuf == nil {
			if buf.Len() >= cl.ops.options.ClientNetWriteBufferSize {
				return buf.WriteTo(cl.Net.Conn)
			}
			cl.Net.outbuf = new(bytes.Buffer)
		}

		n, _ := cl.Net.outbuf.Write(buf.Bytes()) // will always be successful
		if cl.Net.outbuf.Len() < cl.ops.options.ClientNetWriteBufferSize {
			return int64(n), nil
		}

		err = cl.flushOutbuf()
		return int64(n), err
	}()
	if err != nil {
		return err
@@ -573,3 +635,15 @@ func (cl *Client) WritePacket(pk packets.Packet) error {

	return err
}

func (cl *Client) flushOutbuf() (err error) {
	if cl.Net.outbuf == nil {
		return
	}

	_, err = cl.Net.outbuf.WriteTo(cl.Net.Conn)
	if err == nil {
		cl.Net.outbuf = nil
	}
	return
}
clients_test.go (198 changed lines)

@@ -5,10 +5,14 @@
package mqtt

import (
	"bufio"
	"bytes"
	"context"
	"errors"
	"io"
	"log/slog"
	"net"
	"strings"
	"sync/atomic"
	"testing"
	"time"
@@ -33,6 +37,7 @@ func newTestClient() (cl *Client, r net.Conn, w net.Conn) {
		options: &Options{
			Capabilities: &Capabilities{
				ReceiveMaximum:             10,
				MaximumInflight:            5,
				TopicAliasMaximum:          10000,
				MaximumClientWritesPending: 3,
				maximumPacketID:            10,
@@ -179,6 +184,45 @@ func TestClientParseConnect(t *testing.T) {
	require.Equal(t, int32(pk.Properties.ReceiveMaximum), cl.State.Inflight.maximumSendQuota)
}

func TestClientParseConnectReceiveMaxExceedMaxInflight(t *testing.T) {
	const MaxInflight uint16 = 1
	cl, _, _ := newTestClient()
	cl.ops.options.Capabilities.MaximumInflight = MaxInflight

	pk := packets.Packet{
		ProtocolVersion: 4,
		Connect: packets.ConnectParams{
			ProtocolName:     []byte{'M', 'Q', 'T', 'T'},
			Clean:            true,
			Keepalive:        60,
			ClientIdentifier: "mochi",
			WillFlag:         true,
			WillTopic:        "lwt",
			WillPayload:      []byte("lol gg"),
			WillQos:          1,
			WillRetain:       false,
		},
		Properties: packets.Properties{
			ReceiveMaximum: uint16(5),
		},
	}

	cl.ParseConnect("tcp1", pk)
	require.Equal(t, pk.Connect.ClientIdentifier, cl.ID)
	require.Equal(t, pk.Connect.Keepalive, cl.State.Keepalive)
	require.Equal(t, pk.Connect.Clean, cl.Properties.Clean)
	require.Equal(t, pk.Connect.ClientIdentifier, cl.ID)
	require.Equal(t, pk.Connect.WillTopic, cl.Properties.Will.TopicName)
	require.Equal(t, pk.Connect.WillPayload, cl.Properties.Will.Payload)
	require.Equal(t, pk.Connect.WillQos, cl.Properties.Will.Qos)
	require.Equal(t, pk.Connect.WillRetain, cl.Properties.Will.Retain)
	require.Equal(t, uint32(1), cl.Properties.Will.Flag)
	require.Equal(t, int32(cl.ops.options.Capabilities.ReceiveMaximum), cl.State.Inflight.receiveQuota)
	require.Equal(t, int32(cl.ops.options.Capabilities.ReceiveMaximum), cl.State.Inflight.maximumReceiveQuota)
	require.Equal(t, int32(MaxInflight), cl.State.Inflight.sendQuota)
	require.Equal(t, int32(MaxInflight), cl.State.Inflight.maximumSendQuota)
}

func TestClientParseConnectOverrideWillDelay(t *testing.T) {
	cl, _, _ := newTestClient()

@@ -210,6 +254,27 @@ func TestClientParseConnectNoID(t *testing.T) {
	require.NotEmpty(t, cl.ID)
}

func TestClientParseConnectBelowMinimumKeepalive(t *testing.T) {
	cl, _, _ := newTestClient()
	var b bytes.Buffer
	x := bufio.NewWriter(&b)
	cl.ops.log = slog.New(slog.NewTextHandler(x, nil))

	pk := packets.Packet{
		ProtocolVersion: 4,
		Connect: packets.ConnectParams{
			ProtocolName:     []byte{'M', 'Q', 'T', 'T'},
			Keepalive:        minimumKeepalive - 1,
			ClientIdentifier: "mochi",
		},
	}
	cl.ParseConnect("tcp1", pk)
	err := x.Flush()
	require.NoError(t, err)
	require.True(t, strings.Contains(b.String(), ErrMinimumKeepalive.Error()))
	require.NotEmpty(t, cl.ID)
}

func TestClientNextPacketID(t *testing.T) {
	cl, _, _ := newTestClient()
@@ -277,19 +342,56 @@ func TestClientNextPacketIDOverflow(t *testing.T) {
|
||||
|
||||
func TestClientClearInflights(t *testing.T) {
|
||||
cl, _, _ := newTestClient()
|
||||
n := time.Now().Unix()
|
||||
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 1, Expiry: n - 1})
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 2, Expiry: n - 2})
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 3, Created: n - 3}) // within bounds
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 5, Created: n - 5}) // over max server expiry limit
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 7, Created: n})
|
||||
|
||||
require.Equal(t, 5, cl.State.Inflight.Len())
|
||||
cl.ClearInflights()
|
||||
require.Equal(t, 0, cl.State.Inflight.Len())
|
||||
}
|
||||
|
||||
func TestClientClearExpiredInflights(t *testing.T) {
|
||||
cl, _, _ := newTestClient()
|
||||
|
||||
n := time.Now().Unix()
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 1, Expiry: n - 1})
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 2, Expiry: n - 2})
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 3, Created: n - 3}) // within bounds
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 5, Created: n - 5}) // over max server expiry limit
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 7, Created: n})
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 1, Expiry: n - 1})
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 2, Expiry: n - 2})
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 3, Created: n - 3}) // within bounds
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 5, Created: n - 5}) // over max server expiry limit
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 7, Created: n})
|
||||
require.Equal(t, 5, cl.State.Inflight.Len())
|
||||
|
||||
deleted := cl.ClearInflights(n, 4)
|
||||
deleted := cl.ClearExpiredInflights(n, 4)
|
||||
require.Len(t, deleted, 3)
|
||||
require.ElementsMatch(t, []uint16{1, 2, 5}, deleted)
|
||||
require.Equal(t, 2, cl.State.Inflight.Len())
|
||||
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 11, Expiry: n - 1})
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 12, Expiry: n - 2}) // expiry is ineffective for v3.
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 13, Created: n - 3}) // within bounds for v3
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 15, Created: n - 5}) // over max server expiry limit
|
||||
require.Equal(t, 6, cl.State.Inflight.Len())
|
||||
|
||||
deleted = cl.ClearExpiredInflights(n, 4)
|
||||
require.Len(t, deleted, 3)
|
||||
require.ElementsMatch(t, []uint16{11, 12, 15}, deleted)
|
||||
require.Equal(t, 3, cl.State.Inflight.Len())
|
||||
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 17, Created: n - 1})
|
||||
deleted = cl.ClearExpiredInflights(n, 0) // maximumExpiry = 0 means abandoned messages are not processed
|
||||
require.Len(t, deleted, 0)
|
||||
require.Equal(t, 4, cl.State.Inflight.Len())
|
||||
|
||||
cl.State.Inflight.Set(packets.Packet{ProtocolVersion: 5, PacketID: 18, Expiry: n - 1})
|
||||
deleted = cl.ClearExpiredInflights(n, 0) // maximumExpiry = 0 does not process abandoned messages
|
||||
require.ElementsMatch(t, []uint16{18}, deleted) // expiry is still effective for v5.
|
||||
require.Len(t, deleted, 1)
|
||||
require.Equal(t, 4, cl.State.Inflight.Len())
|
||||
}
|
||||
|
||||
func TestClientResendInflightMessages(t *testing.T) {
|
||||
@@ -481,9 +583,11 @@ func TestClientReadDone(t *testing.T) {
|
||||
|
||||
func TestClientStop(t *testing.T) {
|
||||
cl, _, _ := newTestClient()
|
||||
require.Equal(t, int64(0), cl.StopTime())
|
||||
cl.Stop(nil)
|
||||
require.Equal(t, nil, cl.State.stopCause.Load())
|
||||
require.Equal(t, time.Now().Unix(), cl.State.disconnected)
|
||||
require.InDelta(t, time.Now().Unix(), cl.State.disconnected, 1.0)
|
||||
require.Equal(t, cl.State.disconnected, cl.StopTime())
|
||||
require.True(t, cl.Closed())
|
||||
require.Equal(t, nil, cl.StopCause())
|
||||
}
|
||||
@@ -647,6 +751,86 @@ func TestClientWritePacket(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestClientWritePacketBuffer(t *testing.T) {
|
||||
r, w := net.Pipe()
|
||||
|
||||
cl := newClient(w, &ops{
|
||||
info: new(system.Info),
|
||||
hooks: new(Hooks),
|
||||
log: logger,
|
||||
options: &Options{
|
||||
Capabilities: &Capabilities{
|
||||
ReceiveMaximum: 10,
|
||||
TopicAliasMaximum: 10000,
|
||||
MaximumClientWritesPending: 3,
|
||||
maximumPacketID: 10,
|
||||
},
|
||||
},
|
||||
})
|
||||
|
||||
cl.ID = "mochi"
|
||||
cl.State.Inflight.maximumSendQuota = 5
|
||||
cl.State.Inflight.sendQuota = 5
|
||||
cl.State.Inflight.maximumReceiveQuota = 10
|
||||
cl.State.Inflight.receiveQuota = 10
|
||||
cl.Properties.Props.TopicAliasMaximum = 0
|
||||
cl.Properties.Props.RequestResponseInfo = 0x1
|
||||
|
||||
cl.ops.options.ClientNetWriteBufferSize = 10
|
||||
defer cl.Stop(errClientStop)
|
||||
|
||||
small := packets.TPacketData[packets.Publish].Get(packets.TPublishNoPayload).Packet
|
||||
large := packets.TPacketData[packets.Publish].Get(packets.TPublishBasic).Packet
|
||||
|
||||
cl.State.outbound <- small
|
||||
|
||||
tt := []struct {
|
||||
pks []*packets.Packet
|
||||
size int
|
||||
}{
|
||||
{
|
||||
pks: []*packets.Packet{small, small},
|
||||
size: 18,
|
||||
},
|
||||
{
|
||||
pks: []*packets.Packet{large},
|
||||
size: 20,
|
||||
},
|
||||
{
|
||||
pks: []*packets.Packet{small},
|
||||
size: 0,
|
||||
},
|
||||
}
|
||||
|
||||
go func() {
|
||||
for i, tx := range tt {
|
||||
for _, pk := range tx.pks {
|
||||
cl.Properties.ProtocolVersion = pk.ProtocolVersion
|
||||
err := cl.WritePacket(*pk)
|
||||
require.NoError(t, err, "index: %d", i)
|
||||
if i == len(tt)-1 {
|
||||
cl.Net.Conn.Close()
|
||||
}
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
var n int
|
||||
var err error
|
||||
for i, tx := range tt {
|
||||
buf := make([]byte, 100)
|
||||
if i == len(tt)-1 {
|
||||
buf, err = io.ReadAll(r)
|
||||
n = len(buf)
|
||||
} else {
|
||||
n, err = io.ReadAtLeast(r, buf, 1)
|
||||
}
|
||||
require.NoError(t, err, "index: %d", i)
|
||||
require.Equal(t, tx.size, n, "index: %d", i)
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteClientOversizePacket(t *testing.T) {
|
||||
cl, _, _ := newTestClient()
|
||||
cl.Properties.Props.MaximumPacketSize = 2
|
||||
|
||||
56 cmd/docker/main.go Normal file
@@ -0,0 +1,56 @@
// SPDX-License-Identifier: MIT
// SPDX-FileCopyrightText: 2023 mochi-mqtt
// SPDX-FileContributor: dgduncan, mochi-co

package main

import (
	"flag"
	"github.com/mochi-mqtt/server/v2/config"
	"log"
	"log/slog"
	"os"
	"os/signal"
	"syscall"

	mqtt "github.com/mochi-mqtt/server/v2"
)

func main() {
	slog.SetDefault(slog.New(slog.NewTextHandler(os.Stdout, nil))) // set basic logger to ensure logs before configuration are in a consistent format

	configFile := flag.String("config", "config.yaml", "path to mochi config yaml or json file")
	flag.Parse()

	sigs := make(chan os.Signal, 1)
	done := make(chan bool, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-sigs
		done <- true
	}()

	configBytes, err := os.ReadFile(*configFile)
	if err != nil {
		log.Fatal(err)
	}

	options, err := config.FromBytes(configBytes)
	if err != nil {
		log.Fatal(err)
	}

	server := mqtt.New(options)

	go func() {
		err := server.Serve()
		if err != nil {
			log.Fatal(err)
		}
	}()

	<-done
	server.Log.Warn("caught signal, stopping...")
	_ = server.Close()
	server.Log.Info("mochi mqtt shutdown complete")
}
21 cmd/main.go
@@ -33,19 +33,31 @@ func main() {
	server := mqtt.New(nil)
	_ = server.AddHook(new(auth.AllowHook), nil)

	tcp := listeners.NewTCP("t1", *tcpAddr, nil)
	tcp := listeners.NewTCP(listeners.Config{
		ID:      "t1",
		Address: *tcpAddr,
	})
	err := server.AddListener(tcp)
	if err != nil {
		log.Fatal(err)
	}

	ws := listeners.NewWebsocket("ws1", *wsAddr, nil)
	ws := listeners.NewWebsocket(listeners.Config{
		ID:      "ws1",
		Address: *wsAddr,
	})
	err = server.AddListener(ws)
	if err != nil {
		log.Fatal(err)
	}

	stats := listeners.NewHTTPStats("stats", *infoAddr, nil, server.Info)
	stats := listeners.NewHTTPStats(
		listeners.Config{
			ID:      "info",
			Address: *infoAddr,
		},
		server.Info,
	)
	err = server.AddListener(stats)
	if err != nil {
		log.Fatal(err)
@@ -61,6 +73,5 @@ func main() {
	<-done
	server.Log.Warn("caught signal, stopping...")
	_ = server.Close()
	server.Log.Info("main.go finished")
	server.Log.Info("mochi mqtt shutdown complete")
}
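The hunk above is the first of many in this changeset that migrate listener construction from positional arguments to a listeners.Config value. A standalone sketch of the new call shape is below; the ID and address are example values, and the commented TLSConfig field refers to the TLS example further down in this diff.

```go
// Illustrative sketch of the listeners.Config-based constructor used throughout this release.
// Previously: listeners.NewTCP("t1", ":1883", nil).
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/hooks/auth"
	"github.com/mochi-mqtt/server/v2/listeners"
)

func main() {
	server := mqtt.New(nil)
	_ = server.AddHook(new(auth.AllowHook), nil)

	tcp := listeners.NewTCP(listeners.Config{
		ID:      "t1",
		Address: ":1883",
		// TLSConfig: tlsConfig, // optional *tls.Config, see the TLS example later in this diff
	})
	if err := server.AddListener(tcp); err != nil {
		log.Fatal(err)
	}

	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}
}
```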
15 config.yaml Normal file
@@ -0,0 +1,15 @@
listeners:
  - type: "tcp"
    id: "tcp1"
    address: ":1883"
  - type: "ws"
    id: "ws1"
    address: ":1882"
  - type: "sysinfo"
    id: "stats"
    address: ":1880"
hooks:
  auth:
    allow_all: true
options:
  inline_client: true
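For orientation, the bundled config.yaml above describes the same server you could assemble in code. The sketch below is illustrative only: it assumes the `sysinfo` listener type maps to the HTTP stats listener and that `options.inline_client` corresponds to the `InlineClient` option, both inferred from other files in this changeset rather than stated here.

```go
// Illustrative sketch: the rough programmatic equivalent of the bundled config.yaml.
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/hooks/auth"
	"github.com/mochi-mqtt/server/v2/listeners"
)

func main() {
	server := mqtt.New(&mqtt.Options{InlineClient: true}) // options.inline_client: true
	_ = server.AddHook(new(auth.AllowHook), nil)          // hooks.auth.allow_all: true

	// listeners: the tcp, ws and sysinfo entries (sysinfo -> HTTP stats is assumed here)
	_ = server.AddListener(listeners.NewTCP(listeners.Config{ID: "tcp1", Address: ":1883"}))
	_ = server.AddListener(listeners.NewWebsocket(listeners.Config{ID: "ws1", Address: ":1882"}))
	_ = server.AddListener(listeners.NewHTTPStats(listeners.Config{ID: "stats", Address: ":1880"}, server.Info))

	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}
}
```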
175 config/config.go Normal file
@@ -0,0 +1,175 @@
// SPDX-License-Identifier: MIT
// SPDX-FileCopyrightText: 2023 mochi-mqtt, mochi-co
// SPDX-FileContributor: mochi-co

package config

import (
	"encoding/json"
	"log/slog"
	"os"

	"github.com/mochi-mqtt/server/v2/hooks/auth"
	"github.com/mochi-mqtt/server/v2/hooks/debug"
	"github.com/mochi-mqtt/server/v2/hooks/storage/badger"
	"github.com/mochi-mqtt/server/v2/hooks/storage/bolt"
	"github.com/mochi-mqtt/server/v2/hooks/storage/pebble"
	"github.com/mochi-mqtt/server/v2/hooks/storage/redis"
	"github.com/mochi-mqtt/server/v2/listeners"
	"gopkg.in/yaml.v3"

	mqtt "github.com/mochi-mqtt/server/v2"
)

// config defines the structure of configuration data to be parsed from a config source.
type config struct {
	Options       mqtt.Options
	Listeners     []listeners.Config `yaml:"listeners" json:"listeners"`
	HookConfigs   HookConfigs        `yaml:"hooks" json:"hooks"`
	LoggingConfig LoggingConfig      `yaml:"logging" json:"logging"`
}

type LoggingConfig struct {
	Level string
}

func (lc LoggingConfig) ToLogger() *slog.Logger {
	var level slog.Level
	if err := level.UnmarshalText([]byte(lc.Level)); err != nil {
		level = slog.LevelInfo
	}

	leveler := new(slog.LevelVar)
	leveler.Set(level)
	return slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
		Level: leveler,
	}))
}

// HookConfigs contains configurations to enable individual hooks.
type HookConfigs struct {
	Auth    *HookAuthConfig    `yaml:"auth" json:"auth"`
	Storage *HookStorageConfig `yaml:"storage" json:"storage"`
	Debug   *debug.Options     `yaml:"debug" json:"debug"`
}

// HookAuthConfig contains configurations for the auth hook.
type HookAuthConfig struct {
	Ledger   auth.Ledger `yaml:"ledger" json:"ledger"`
	AllowAll bool        `yaml:"allow_all" json:"allow_all"`
}

// HookStorageConfig contains configurations for the different storage hooks.
type HookStorageConfig struct {
	Badger *badger.Options `yaml:"badger" json:"badger"`
	Bolt   *bolt.Options   `yaml:"bolt" json:"bolt"`
	Pebble *pebble.Options `yaml:"pebble" json:"pebble"`
	Redis  *redis.Options  `yaml:"redis" json:"redis"`
}

// ToHooks converts Hook file configurations into Hooks to be added to the server.
func (hc HookConfigs) ToHooks() []mqtt.HookLoadConfig {
	var hlc []mqtt.HookLoadConfig

	if hc.Auth != nil {
		hlc = append(hlc, hc.toHooksAuth()...)
	}

	if hc.Storage != nil {
		hlc = append(hlc, hc.toHooksStorage()...)
	}

	if hc.Debug != nil {
		hlc = append(hlc, mqtt.HookLoadConfig{
			Hook:   new(debug.Hook),
			Config: hc.Debug,
		})
	}

	return hlc
}

// toHooksAuth converts auth hook configurations into auth hooks.
func (hc HookConfigs) toHooksAuth() []mqtt.HookLoadConfig {
	var hlc []mqtt.HookLoadConfig
	if hc.Auth.AllowAll {
		hlc = append(hlc, mqtt.HookLoadConfig{
			Hook: new(auth.AllowHook),
		})
	} else {
		hlc = append(hlc, mqtt.HookLoadConfig{
			Hook: new(auth.Hook),
			Config: &auth.Options{
				Ledger: &auth.Ledger{ // avoid copying sync.Locker
					Users: hc.Auth.Ledger.Users,
					Auth:  hc.Auth.Ledger.Auth,
					ACL:   hc.Auth.Ledger.ACL,
				},
			},
		})
	}
	return hlc
}

// toHooksStorage converts storage hook configurations into storage hooks.
func (hc HookConfigs) toHooksStorage() []mqtt.HookLoadConfig {
	var hlc []mqtt.HookLoadConfig
	if hc.Storage.Badger != nil {
		hlc = append(hlc, mqtt.HookLoadConfig{
			Hook:   new(badger.Hook),
			Config: hc.Storage.Badger,
		})
	}

	if hc.Storage.Bolt != nil {
		hlc = append(hlc, mqtt.HookLoadConfig{
			Hook:   new(bolt.Hook),
			Config: hc.Storage.Bolt,
		})
	}

	if hc.Storage.Redis != nil {
		hlc = append(hlc, mqtt.HookLoadConfig{
			Hook:   new(redis.Hook),
			Config: hc.Storage.Redis,
		})
	}

	if hc.Storage.Pebble != nil {
		hlc = append(hlc, mqtt.HookLoadConfig{
			Hook:   new(pebble.Hook),
			Config: hc.Storage.Pebble,
		})
	}
	return hlc
}

// FromBytes unmarshals a byte slice of JSON or YAML config data into a valid server options value.
// Any hook configurations are converted into Hooks using the toHooks methods in this package.
func FromBytes(b []byte) (*mqtt.Options, error) {
	c := new(config)
	o := mqtt.Options{}

	if len(b) == 0 {
		return nil, nil
	}

	if b[0] == '{' {
		err := json.Unmarshal(b, c)
		if err != nil {
			return nil, err
		}
	} else {
		err := yaml.Unmarshal(b, c)
		if err != nil {
			return nil, err
		}
	}

	o = c.Options
	o.Hooks = c.HookConfigs.ToHooks()
	o.Listeners = c.Listeners
	o.Logger = c.LoggingConfig.ToLogger()

	return &o, nil
}
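The config package's entry point is FromBytes, which sniffs the payload: input beginning with '{' is parsed as JSON, anything else as YAML, and the resulting options carry the converted hooks, listeners and logger. A minimal usage sketch follows; the inline YAML literal is made up for illustration, and in practice the bytes would usually come from a file, as in cmd/docker and examples/config.

```go
// Illustrative sketch: turning raw YAML (or JSON) config bytes into a running server.
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/config"
)

func main() {
	// A JSON document (starting with '{') would be accepted by the same call.
	yamlCfg := []byte("listeners:\n  - type: \"tcp\"\n    id: \"tcp1\"\n    address: \":1883\"\nhooks:\n  auth:\n    allow_all: true\n")

	options, err := config.FromBytes(yamlCfg)
	if err != nil {
		log.Fatal(err)
	}

	server := mqtt.New(options)
	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}
}
```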
238 config/config_test.go Normal file
@@ -0,0 +1,238 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2023 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: mochi-co
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"log/slog"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/mochi-mqtt/server/v2/hooks/auth"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage/badger"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage/bolt"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage/pebble"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage/redis"
|
||||
"github.com/mochi-mqtt/server/v2/listeners"
|
||||
|
||||
mqtt "github.com/mochi-mqtt/server/v2"
|
||||
)
|
||||
|
||||
var (
|
||||
yamlBytes = []byte(`
|
||||
listeners:
|
||||
- type: "tcp"
|
||||
id: "file-tcp1"
|
||||
address: ":1883"
|
||||
hooks:
|
||||
auth:
|
||||
allow_all: true
|
||||
options:
|
||||
client_net_write_buffer_size: 2048
|
||||
capabilities:
|
||||
minimum_protocol_version: 3
|
||||
compatibilities:
|
||||
restore_sys_info_on_restart: true
|
||||
`)
|
||||
|
||||
jsonBytes = []byte(`{
|
||||
"listeners": [
|
||||
{
|
||||
"type": "tcp",
|
||||
"id": "file-tcp1",
|
||||
"address": ":1883"
|
||||
}
|
||||
],
|
||||
"hooks": {
|
||||
"auth": {
|
||||
"allow_all": true
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"client_net_write_buffer_size": 2048,
|
||||
"capabilities": {
|
||||
"minimum_protocol_version": 3,
|
||||
"compatibilities": {
|
||||
"restore_sys_info_on_restart": true
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
`)
|
||||
parsedOptions = mqtt.Options{
|
||||
Listeners: []listeners.Config{
|
||||
{
|
||||
Type: listeners.TypeTCP,
|
||||
ID: "file-tcp1",
|
||||
Address: ":1883",
|
||||
},
|
||||
},
|
||||
Hooks: []mqtt.HookLoadConfig{
|
||||
{
|
||||
Hook: new(auth.AllowHook),
|
||||
},
|
||||
},
|
||||
ClientNetWriteBufferSize: 2048,
|
||||
Capabilities: &mqtt.Capabilities{
|
||||
MinimumProtocolVersion: 3,
|
||||
Compatibilities: mqtt.Compatibilities{
|
||||
RestoreSysInfoOnRestart: true,
|
||||
},
|
||||
},
|
||||
Logger: slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
|
||||
Level: new(slog.LevelVar),
|
||||
})),
|
||||
}
|
||||
)
|
||||
|
||||
func TestFromBytesEmptyL(t *testing.T) {
|
||||
_, err := FromBytes([]byte{})
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestFromBytesYAML(t *testing.T) {
|
||||
o, err := FromBytes(yamlBytes)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, parsedOptions, *o)
|
||||
}
|
||||
|
||||
func TestFromBytesYAMLError(t *testing.T) {
|
||||
_, err := FromBytes(append(yamlBytes, 'a'))
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestFromBytesJSON(t *testing.T) {
|
||||
o, err := FromBytes(jsonBytes)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, parsedOptions, *o)
|
||||
}
|
||||
|
||||
func TestFromBytesJSONError(t *testing.T) {
|
||||
_, err := FromBytes(append(jsonBytes, 'a'))
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestToHooksAuthAllowAll(t *testing.T) {
|
||||
hc := HookConfigs{
|
||||
Auth: &HookAuthConfig{
|
||||
AllowAll: true,
|
||||
},
|
||||
}
|
||||
|
||||
th := hc.toHooksAuth()
|
||||
expect := []mqtt.HookLoadConfig{
|
||||
{Hook: new(auth.AllowHook)},
|
||||
}
|
||||
require.Equal(t, expect, th)
|
||||
}
|
||||
|
||||
func TestToHooksAuthAllowLedger(t *testing.T) {
|
||||
hc := HookConfigs{
|
||||
Auth: &HookAuthConfig{
|
||||
Ledger: auth.Ledger{
|
||||
Auth: auth.AuthRules{
|
||||
{Username: "peach", Password: "password1", Allow: true},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
th := hc.toHooksAuth()
|
||||
expect := []mqtt.HookLoadConfig{
|
||||
{
|
||||
Hook: new(auth.Hook),
|
||||
Config: &auth.Options{
|
||||
Ledger: &auth.Ledger{ // avoid copying sync.Locker
|
||||
Auth: auth.AuthRules{
|
||||
{Username: "peach", Password: "password1", Allow: true},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
require.Equal(t, expect, th)
|
||||
}
|
||||
|
||||
func TestToHooksStorageBadger(t *testing.T) {
|
||||
hc := HookConfigs{
|
||||
Storage: &HookStorageConfig{
|
||||
Badger: &badger.Options{
|
||||
Path: "badger",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
th := hc.toHooksStorage()
|
||||
expect := []mqtt.HookLoadConfig{
|
||||
{
|
||||
Hook: new(badger.Hook),
|
||||
Config: hc.Storage.Badger,
|
||||
},
|
||||
}
|
||||
|
||||
require.Equal(t, expect, th)
|
||||
}
|
||||
|
||||
func TestToHooksStorageBolt(t *testing.T) {
|
||||
hc := HookConfigs{
|
||||
Storage: &HookStorageConfig{
|
||||
Bolt: &bolt.Options{
|
||||
Path: "bolt",
|
||||
Bucket: "mochi",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
th := hc.toHooksStorage()
|
||||
expect := []mqtt.HookLoadConfig{
|
||||
{
|
||||
Hook: new(bolt.Hook),
|
||||
Config: hc.Storage.Bolt,
|
||||
},
|
||||
}
|
||||
|
||||
require.Equal(t, expect, th)
|
||||
}
|
||||
|
||||
func TestToHooksStorageRedis(t *testing.T) {
|
||||
hc := HookConfigs{
|
||||
Storage: &HookStorageConfig{
|
||||
Redis: &redis.Options{
|
||||
Username: "test",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
th := hc.toHooksStorage()
|
||||
expect := []mqtt.HookLoadConfig{
|
||||
{
|
||||
Hook: new(redis.Hook),
|
||||
Config: hc.Storage.Redis,
|
||||
},
|
||||
}
|
||||
|
||||
require.Equal(t, expect, th)
|
||||
}
|
||||
|
||||
func TestToHooksStoragePebble(t *testing.T) {
|
||||
hc := HookConfigs{
|
||||
Storage: &HookStorageConfig{
|
||||
Pebble: &pebble.Options{
|
||||
Path: "pebble",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
th := hc.toHooksStorage()
|
||||
expect := []mqtt.HookLoadConfig{
|
||||
{
|
||||
Hook: new(pebble.Hook),
|
||||
Config: hc.Storage.Pebble,
|
||||
},
|
||||
}
|
||||
|
||||
require.Equal(t, expect, th)
|
||||
}
|
||||
@@ -63,7 +63,10 @@ func main() {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
@@ -45,7 +45,10 @@ func main() {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
@@ -32,7 +32,10 @@ func main() {
|
||||
server.Options.Capabilities.MaximumClientWritesPending = 16 * 1024
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
|
||||
tcp := listeners.NewTCP("t1", *tcpAddr, nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: *tcpAddr,
|
||||
})
|
||||
err := server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
97 examples/config/config.json Normal file
@@ -0,0 +1,97 @@
|
||||
{
|
||||
"listeners": [
|
||||
{
|
||||
"type": "tcp",
|
||||
"id": "file-tcp1",
|
||||
"address": ":1883"
|
||||
},
|
||||
{
|
||||
"type": "ws",
|
||||
"id": "file-websocket",
|
||||
"address": ":1882"
|
||||
},
|
||||
{
|
||||
"type": "healthcheck",
|
||||
"id": "file-healthcheck",
|
||||
"address": ":1880"
|
||||
}
|
||||
],
|
||||
"hooks": {
|
||||
"debug": {
|
||||
"enable": true
|
||||
},
|
||||
"storage": {
|
||||
"pebble": {
|
||||
"path": "pebble.db",
|
||||
"mode": "NoSync"
|
||||
},
|
||||
"badger": {
|
||||
"path": "badger.db",
|
||||
"gc_interval": 3,
|
||||
"gc_discard_ratio": 0.5
|
||||
},
|
||||
"bolt": {
|
||||
"path": "bolt.db",
|
||||
"bucket": "mochi"
|
||||
},
|
||||
"redis": {
|
||||
"h_prefix": "mc",
|
||||
"username": "mochi",
|
||||
"password": "melon",
|
||||
"address": "localhost:6379",
|
||||
"database": 1
|
||||
}
|
||||
},
|
||||
"auth": {
|
||||
"allow_all": false,
|
||||
"ledger": {
|
||||
"auth": [
|
||||
{
|
||||
"username": "peach",
|
||||
"password": "password1",
|
||||
"allow": true
|
||||
}
|
||||
],
|
||||
"acl": [
|
||||
{
|
||||
"remote": "127.0.0.1:*"
|
||||
},
|
||||
{
|
||||
"username": "melon",
|
||||
"filters": null,
|
||||
"melon/#": 3,
|
||||
"updates/#": 2
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"options": {
|
||||
"client_net_write_buffer_size": 2048,
|
||||
"client_net_read_buffer_size": 2048,
|
||||
"sys_topic_resend_interval": 10,
|
||||
"inline_client": true,
|
||||
"capabilities": {
|
||||
"maximum_message_expiry_interval": 100,
|
||||
"maximum_client_writes_pending": 8192,
|
||||
"maximum_session_expiry_interval": 86400,
|
||||
"maximum_packet_size": 0,
|
||||
"receive_maximum": 1024,
|
||||
"maximum_inflight": 8192,
|
||||
"topic_alias_maximum": 65535,
|
||||
"shared_sub_available": 1,
|
||||
"minimum_protocol_version": 3,
|
||||
"maximum_qos": 2,
|
||||
"retain_available": 1,
|
||||
"wildcard_sub_available": 1,
|
||||
"sub_id_available": 1,
|
||||
"compatibilities": {
|
||||
"obscure_not_authorized": true,
|
||||
"passive_client_disconnect": false,
|
||||
"always_return_response_info": false,
|
||||
"restore_sys_info_on_restart": false,
|
||||
"no_inherited_properties_on_ack": false
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
70 examples/config/config.yaml Normal file
@@ -0,0 +1,70 @@
|
||||
listeners:
|
||||
- type: "tcp"
|
||||
id: "file-tcp1"
|
||||
address: ":1883"
|
||||
- type: "ws"
|
||||
id: "file-websocket"
|
||||
address: ":1882"
|
||||
- type: "healthcheck"
|
||||
id: "file-healthcheck"
|
||||
address: ":1880"
|
||||
hooks:
|
||||
debug:
|
||||
enable: true
|
||||
storage:
|
||||
badger:
|
||||
path: badger.db
|
||||
gc_interval: 3
|
||||
gc_discard_ratio: 0.5
|
||||
pebble:
|
||||
path: pebble.db
|
||||
mode: "NoSync"
|
||||
bolt:
|
||||
path: bolt.db
|
||||
bucket: "mochi"
|
||||
redis:
|
||||
h_prefix: "mc"
|
||||
username: "mochi"
|
||||
password: "melon"
|
||||
address: "localhost:6379"
|
||||
database: 1
|
||||
auth:
|
||||
allow_all: false
|
||||
ledger:
|
||||
auth:
|
||||
- username: peach
|
||||
password: password1
|
||||
allow: true
|
||||
acl:
|
||||
- remote: 127.0.0.1:*
|
||||
- username: melon
|
||||
filters:
|
||||
melon/#: 3
|
||||
updates/#: 2
|
||||
options:
|
||||
client_net_write_buffer_size: 2048
|
||||
client_net_read_buffer_size: 2048
|
||||
sys_topic_resend_interval: 10
|
||||
inline_client: true
|
||||
capabilities:
|
||||
maximum_message_expiry_interval: 100
|
||||
maximum_client_writes_pending: 8192
|
||||
maximum_session_expiry_interval: 86400
|
||||
maximum_packet_size: 0
|
||||
receive_maximum: 1024
|
||||
maximum_inflight: 8192
|
||||
topic_alias_maximum: 65535
|
||||
shared_sub_available: 1
|
||||
minimum_protocol_version: 3
|
||||
maximum_qos: 2
|
||||
retain_available: 1
|
||||
wildcard_sub_available: 1
|
||||
sub_id_available: 1
|
||||
compatibilities:
|
||||
obscure_not_authorized: true
|
||||
passive_client_disconnect: false
|
||||
always_return_response_info: false
|
||||
restore_sys_info_on_restart: false
|
||||
no_inherited_properties_on_ack: false
|
||||
logging:
|
||||
level: INFO
|
||||
49 examples/config/main.go Normal file
@@ -0,0 +1,49 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2023 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: mochi-co
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"github.com/mochi-mqtt/server/v2/config"
|
||||
"log"
|
||||
"os"
|
||||
"os/signal"
|
||||
"syscall"
|
||||
|
||||
mqtt "github.com/mochi-mqtt/server/v2"
|
||||
)
|
||||
|
||||
func main() {
|
||||
sigs := make(chan os.Signal, 1)
|
||||
done := make(chan bool, 1)
|
||||
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
|
||||
go func() {
|
||||
<-sigs
|
||||
done <- true
|
||||
}()
|
||||
|
||||
configBytes, err := os.ReadFile("config.json")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
options, err := config.FromBytes(configBytes)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
server := mqtt.New(options)
|
||||
|
||||
go func() {
|
||||
err := server.Serve()
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}()
|
||||
|
||||
<-done
|
||||
server.Log.Warn("caught signal, stopping...")
|
||||
_ = server.Close()
|
||||
server.Log.Info("main.go finished")
|
||||
}
|
||||
@@ -46,7 +46,10 @@ func main() {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
@@ -28,15 +28,25 @@ func main() {
|
||||
done <- true
|
||||
}()
|
||||
|
||||
server := mqtt.New(nil)
|
||||
server := mqtt.New(&mqtt.Options{
|
||||
InlineClient: true, // you must enable inline client to use direct publishing and subscribing.
|
||||
})
|
||||
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err := server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
err = server.AddHook(new(ExampleHook), map[string]any{})
|
||||
// Add custom hook (ExampleHook) to the server
|
||||
err = server.AddHook(new(ExampleHook), &ExampleHookOptions{
|
||||
Server: server,
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
@@ -87,8 +97,14 @@ func main() {
|
||||
server.Log.Info("main.go finished")
|
||||
}
|
||||
|
||||
// Options contains configuration settings for the hook.
|
||||
type ExampleHookOptions struct {
|
||||
Server *mqtt.Server
|
||||
}
|
||||
|
||||
type ExampleHook struct {
|
||||
mqtt.HookBase
|
||||
config *ExampleHookOptions
|
||||
}
|
||||
|
||||
func (h *ExampleHook) ID() string {
|
||||
@@ -108,11 +124,34 @@ func (h *ExampleHook) Provides(b byte) bool {
|
||||
|
||||
func (h *ExampleHook) Init(config any) error {
|
||||
h.Log.Info("initialised")
|
||||
if _, ok := config.(*ExampleHookOptions); !ok && config != nil {
|
||||
return mqtt.ErrInvalidConfigType
|
||||
}
|
||||
|
||||
h.config = config.(*ExampleHookOptions)
|
||||
if h.config.Server == nil {
|
||||
return mqtt.ErrInvalidConfigType
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// subscribeCallback handles messages for subscribed topics
|
||||
func (h *ExampleHook) subscribeCallback(cl *mqtt.Client, sub packets.Subscription, pk packets.Packet) {
|
||||
h.Log.Info("hook subscribed message", "client", cl.ID, "topic", pk.TopicName)
|
||||
}
|
||||
|
||||
func (h *ExampleHook) OnConnect(cl *mqtt.Client, pk packets.Packet) error {
|
||||
h.Log.Info("client connected", "client", cl.ID)
|
||||
|
||||
// Example demonstrating how to subscribe to a topic within the hook.
|
||||
h.config.Server.Subscribe("hook/direct/publish", 1, h.subscribeCallback)
|
||||
|
||||
// Example demonstrating how to publish a message within the hook
|
||||
err := h.config.Server.Publish("hook/direct/publish", []byte("packet hook message"), false, 0)
|
||||
if err != nil {
|
||||
h.Log.Error("hook.publish", "error", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@@ -31,7 +31,10 @@ func main() {
|
||||
server.Options.Capabilities.Compatibilities.NoInheritedPropertiesOnAck = true
|
||||
|
||||
_ = server.AddHook(new(pahoAuthHook), nil)
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err := server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: mochi-co
|
||||
// SPDX-FileContributor: mochi-co, werbenhu
|
||||
|
||||
package main
|
||||
|
||||
@@ -10,6 +10,7 @@ import (
|
||||
"os/signal"
|
||||
"syscall"
|
||||
|
||||
badgerdb "github.com/dgraph-io/badger/v4"
|
||||
mqtt "github.com/mochi-mqtt/server/v2"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/auth"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage/badger"
|
||||
@@ -31,14 +32,35 @@ func main() {
|
||||
server := mqtt.New(nil)
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
|
||||
badgerOpts := badgerdb.DefaultOptions(badgerPath) // BadgerDB options. Adjust according to your actual scenario.
|
||||
badgerOpts.ValueLogFileSize = 100 * (1 << 20) // Set the default size of the log file to 100 MB.
|
||||
|
||||
// AddHook adds a BadgerDB hook to the server with the specified options.
|
||||
// GcInterval specifies the interval at which the BadgerDB garbage collection process runs.
|
||||
// Refer to https://dgraph.io/docs/badger/get-started/#garbage-collection for more information.
|
||||
err := server.AddHook(new(badger.Hook), &badger.Options{
|
||||
Path: badgerPath,
|
||||
|
||||
// Set the interval for garbage collection. Adjust according to your actual scenario.
|
||||
GcInterval: 5 * 60,
|
||||
|
||||
// GcDiscardRatio specifies the ratio of log discard compared to the maximum possible log discard.
|
||||
// Setting it to a higher value would result in fewer space reclaims, while setting it to a lower value
|
||||
// would result in more space reclaims at the cost of increased activity on the LSM tree.
|
||||
// discardRatio must be in the range (0.0, 1.0), both endpoints excluded, otherwise, it will be set to the default value of 0.5.
|
||||
// Adjust according to your actual scenario.
|
||||
GcDiscardRatio: 0.5,
|
||||
|
||||
Options: &badgerOpts,
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: mochi-co
|
||||
// SPDX-FileContributor: mochi-co, werbenhu
|
||||
|
||||
package main
|
||||
|
||||
@@ -19,6 +19,9 @@ import (
|
||||
)
|
||||
|
||||
func main() {
|
||||
boltPath := ".bolt"
|
||||
defer os.RemoveAll(boltPath) // remove the example db files at the end
|
||||
|
||||
sigs := make(chan os.Signal, 1)
|
||||
done := make(chan bool, 1)
|
||||
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
|
||||
@@ -31,7 +34,7 @@ func main() {
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
|
||||
err := server.AddHook(new(bolt.Hook), &bolt.Options{
|
||||
Path: "bolt.db",
|
||||
Path: boltPath,
|
||||
Options: &bbolt.Options{
|
||||
Timeout: 500 * time.Millisecond,
|
||||
},
|
||||
@@ -40,7 +43,10 @@ func main() {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
62 examples/persistence/pebble/main.go Normal file
@@ -0,0 +1,62 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co, werbenhu
|
||||
// SPDX-FileContributor: werbenhu
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"log"
|
||||
"os"
|
||||
"os/signal"
|
||||
"syscall"
|
||||
|
||||
mqtt "github.com/mochi-mqtt/server/v2"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/auth"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage/pebble"
|
||||
"github.com/mochi-mqtt/server/v2/listeners"
|
||||
)
|
||||
|
||||
func main() {
|
||||
pebblePath := ".pebble"
|
||||
defer os.RemoveAll(pebblePath) // remove the example pebble files at the end
|
||||
|
||||
sigs := make(chan os.Signal, 1)
|
||||
done := make(chan bool, 1)
|
||||
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
|
||||
go func() {
|
||||
<-sigs
|
||||
done <- true
|
||||
}()
|
||||
|
||||
server := mqtt.New(nil)
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
|
||||
err := server.AddHook(new(pebble.Hook), &pebble.Options{
|
||||
Path: pebblePath,
|
||||
Mode: pebble.NoSync,
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
go func() {
|
||||
err := server.Serve()
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}()
|
||||
|
||||
<-done
|
||||
server.Log.Warn("caught signal, stopping...")
|
||||
_ = server.Close()
|
||||
server.Log.Info("main.go finished")
|
||||
}
|
||||
@@ -48,7 +48,10 @@ func main() {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
@@ -38,7 +38,10 @@ func main() {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tcp := listeners.NewTCP("t1", ":1883", nil)
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
@@ -79,7 +79,9 @@ func main() {
|
||||
server := mqtt.New(nil)
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
|
||||
tcp := listeners.NewTCP("t1", ":1883", &listeners.Config{
|
||||
tcp := listeners.NewTCP(listeners.Config{
|
||||
ID: "t1",
|
||||
Address: ":1883",
|
||||
TLSConfig: tlsConfig,
|
||||
})
|
||||
err = server.AddListener(tcp)
|
||||
@@ -87,7 +89,9 @@ func main() {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
ws := listeners.NewWebsocket("ws1", ":1882", &listeners.Config{
|
||||
ws := listeners.NewWebsocket(listeners.Config{
|
||||
ID: "ws1",
|
||||
Address: ":1882",
|
||||
TLSConfig: tlsConfig,
|
||||
})
|
||||
err = server.AddListener(ws)
|
||||
@@ -95,9 +99,13 @@ func main() {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
stats := listeners.NewHTTPStats("stats", ":8080", &listeners.Config{
|
||||
TLSConfig: tlsConfig,
|
||||
}, server.Info)
|
||||
stats := listeners.NewHTTPStats(
|
||||
listeners.Config{
|
||||
ID: "stats",
|
||||
Address: ":8080",
|
||||
TLSConfig: tlsConfig,
|
||||
}, server.Info,
|
||||
)
|
||||
err = server.AddListener(stats)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
@@ -27,7 +27,10 @@ func main() {
|
||||
server := mqtt.New(nil)
|
||||
_ = server.AddHook(new(auth.AllowHook), nil)
|
||||
|
||||
ws := listeners.NewWebsocket("ws1", ":1882", nil)
|
||||
ws := listeners.NewWebsocket(listeners.Config{
|
||||
ID: "ws1",
|
||||
Address: ":1882",
|
||||
})
|
||||
err := server.AddListener(ws)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
|
||||
47 go.mod
@@ -4,34 +4,53 @@ go 1.21
|
||||
|
||||
require (
|
||||
github.com/alicebob/miniredis/v2 v2.23.0
|
||||
github.com/asdine/storm v2.1.2+incompatible
|
||||
github.com/asdine/storm/v3 v3.2.1
|
||||
github.com/cockroachdb/pebble v1.1.0
|
||||
github.com/dgraph-io/badger/v4 v4.2.0
|
||||
github.com/go-redis/redis/v8 v8.11.5
|
||||
github.com/gorilla/websocket v1.5.0
|
||||
github.com/jinzhu/copier v0.3.5
|
||||
github.com/rs/xid v1.4.0
|
||||
github.com/stretchr/testify v1.7.1
|
||||
github.com/timshannon/badgerhold v1.0.0
|
||||
github.com/stretchr/testify v1.8.1
|
||||
go.etcd.io/bbolt v1.3.5
|
||||
gopkg.in/yaml.v3 v3.0.1
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 // indirect
|
||||
github.com/DataDog/zstd v1.4.5 // indirect
|
||||
github.com/alicebob/gopher-json v0.0.0-20200520072559-a9ecdc9d1d3a // indirect
|
||||
github.com/cespare/xxhash/v2 v2.1.2 // indirect
|
||||
github.com/beorn7/perks v1.0.1 // indirect
|
||||
github.com/cespare/xxhash/v2 v2.2.0 // indirect
|
||||
github.com/cockroachdb/errors v1.11.1 // indirect
|
||||
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b // indirect
|
||||
github.com/cockroachdb/redact v1.1.5 // indirect
|
||||
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 // indirect
|
||||
github.com/davecgh/go-spew v1.1.1 // indirect
|
||||
github.com/dgraph-io/badger v1.6.0 // indirect
|
||||
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 // indirect
|
||||
github.com/dgraph-io/ristretto v0.1.1 // indirect
|
||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
|
||||
github.com/dustin/go-humanize v1.0.0 // indirect
|
||||
github.com/golang/protobuf v1.5.0 // indirect
|
||||
github.com/golang/snappy v0.0.3 // indirect
|
||||
github.com/google/go-cmp v0.5.8 // indirect
|
||||
github.com/getsentry/sentry-go v0.18.0 // indirect
|
||||
github.com/gogo/protobuf v1.3.2 // indirect
|
||||
github.com/golang/glog v1.0.0 // indirect
|
||||
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect
|
||||
github.com/golang/protobuf v1.5.2 // indirect
|
||||
github.com/golang/snappy v0.0.4 // indirect
|
||||
github.com/google/flatbuffers v1.12.1 // indirect
|
||||
github.com/klauspost/compress v1.15.15 // indirect
|
||||
github.com/kr/pretty v0.3.1 // indirect
|
||||
github.com/kr/text v0.2.0 // indirect
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/pmezard/go-difflib v1.0.0 // indirect
|
||||
github.com/prometheus/client_golang v1.12.0 // indirect
|
||||
github.com/prometheus/client_model v0.2.1-0.20210607210712-147c58e9608a // indirect
|
||||
github.com/prometheus/common v0.32.1 // indirect
|
||||
github.com/prometheus/procfs v0.7.3 // indirect
|
||||
github.com/rogpeppe/go-internal v1.9.0 // indirect
|
||||
github.com/yuin/gopher-lua v0.0.0-20210529063254-f4c35e4016d9 // indirect
|
||||
golang.org/x/net v0.7.0 // indirect
|
||||
golang.org/x/sys v0.5.0 // indirect
|
||||
google.golang.org/protobuf v1.28.1 // indirect
|
||||
go.opencensus.io v0.22.5 // indirect
|
||||
golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df // indirect
|
||||
golang.org/x/net v0.23.0 // indirect
|
||||
golang.org/x/sys v0.18.0 // indirect
|
||||
golang.org/x/text v0.14.0 // indirect
|
||||
google.golang.org/protobuf v1.33.0 // indirect
|
||||
)
|
||||
|
||||
564 go.sum
@@ -1,139 +1,577 @@
|
||||
github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
|
||||
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 h1:cTp8I5+VIoKjsnZuH8vjyaysT/ses3EvZeaV/1UkF2M=
|
||||
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
|
||||
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
|
||||
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
|
||||
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
|
||||
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
|
||||
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
|
||||
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
|
||||
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
|
||||
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
|
||||
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
|
||||
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
|
||||
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
|
||||
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
|
||||
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
|
||||
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
|
||||
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
|
||||
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
|
||||
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
|
||||
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
|
||||
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
|
||||
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
|
||||
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
|
||||
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
|
||||
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
|
||||
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
|
||||
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
|
||||
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
|
||||
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
|
||||
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
|
||||
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
|
||||
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
|
||||
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/DataDog/zstd v1.4.1 h1:3oxKN3wbHibqx897utPC2LTQU4J+IHWWJO+glkAkpFM=
|
||||
github.com/DataDog/zstd v1.4.1/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
|
||||
github.com/Sereal/Sereal v0.0.0-20190618215532-0b8ac451a863 h1:BRrxwOZBolJN4gIwvZMJY1tzqBvQgpaZiQRuIDD40jM=
|
||||
github.com/Sereal/Sereal v0.0.0-20190618215532-0b8ac451a863/go.mod h1:D0JMgToj/WdxCgd30Kc1UcA9E+WdZoJqeVOuYW7iTBM=
|
||||
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
|
||||
github.com/DataDog/zstd v1.4.5 h1:EndNeuB0l9syBZhut0wns3gV1hL8zX8LIu6ZiVHWLIQ=
|
||||
github.com/DataDog/zstd v1.4.5/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
|
||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
|
||||
github.com/alicebob/gopher-json v0.0.0-20200520072559-a9ecdc9d1d3a h1:HbKu58rmZpUGpz5+4FfNmIU+FmZg2P3Xaj2v2bfNWmk=
|
||||
github.com/alicebob/gopher-json v0.0.0-20200520072559-a9ecdc9d1d3a/go.mod h1:SGnFV6hVsYE877CKEZ6tDNTjaSXYUk6QqoIK6PrAtcc=
|
||||
github.com/alicebob/miniredis/v2 v2.23.0 h1:+lwAJYjvvdIVg6doFHuotFjueJ/7KY10xo/vm3X3Scw=
|
||||
github.com/alicebob/miniredis/v2 v2.23.0/go.mod h1:XNqvJdQJv5mSuVMc0ynneafpnL/zv52acZ6kqeS0t88=
|
||||
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
|
||||
github.com/asdine/storm v2.1.2+incompatible h1:dczuIkyqwY2LrtXPz8ixMrU/OFgZp71kbKTHGrXYt/Q=
|
||||
github.com/asdine/storm v2.1.2+incompatible/go.mod h1:RarYDc9hq1UPLImuiXK3BIWPJLdIygvV3PsInK0FbVQ=
|
||||
github.com/asdine/storm/v3 v3.2.1 h1:I5AqhkPK6nBZ/qJXySdI7ot5BlXSZ7qvDY1zAn5ZJac=
|
||||
github.com/asdine/storm/v3 v3.2.1/go.mod h1:LEpXwGt4pIqrE/XcTvCnZHT5MgZCV6Ub9q7yQzOFWr0=
|
||||
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
||||
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
|
||||
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
|
||||
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
|
||||
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
|
||||
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
||||
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
|
||||
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
|
||||
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
|
||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
|
||||
github.com/cockroachdb/datadriven v1.0.3-0.20230413201302-be42291fc80f h1:otljaYPt5hWxV3MUfO5dFPFiOXg9CyG5/kCfayTqsJ4=
|
||||
github.com/cockroachdb/datadriven v1.0.3-0.20230413201302-be42291fc80f/go.mod h1:a9RdTaap04u637JoCzcUoIcDmvwSUtcUFtT/C3kJlTU=
|
||||
github.com/cockroachdb/errors v1.11.1 h1:xSEW75zKaKCWzR3OfxXUxgrk/NtT4G1MiOv5lWZazG8=
|
||||
github.com/cockroachdb/errors v1.11.1/go.mod h1:8MUxA3Gi6b25tYlFEBGLf+D8aISL+M4MIpiWMSNRfxw=
|
||||
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b h1:r6VH0faHjZeQy818SGhaone5OnYfxFR/+AzdY3sf5aE=
|
||||
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b/go.mod h1:Vz9DsVWQQhf3vs21MhPMZpMGSht7O/2vFW2xusFUVOs=
|
||||
github.com/cockroachdb/pebble v1.1.0 h1:pcFh8CdCIt2kmEpK0OIatq67Ln9uGDYY3d5XnE0LJG4=
|
||||
github.com/cockroachdb/pebble v1.1.0/go.mod h1:sEHm5NOXxyiAoKWhoFxT8xMgd/f3RA6qUqQ1BXKrh2E=
|
||||
github.com/cockroachdb/redact v1.1.5 h1:u1PMllDkdFfPWaNGMyLD1+so+aq3uUItthCFqzwPJ30=
|
||||
github.com/cockroachdb/redact v1.1.5/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
|
||||
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 h1:zuQyyAKVxetITBuuhv3BI9cMrmStnpT18zmgmTxunpo=
|
||||
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06/go.mod h1:7nc4anLGjupUW/PeY5qiNYsdNXj7zopG+eqsS7To5IQ=
|
||||
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/dgraph-io/badger v1.6.0 h1:DshxFxZWXUcO0xX476VJC07Xsr6ZCBVRHKZ93Oh7Evo=
|
||||
github.com/dgraph-io/badger v1.6.0/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4=
|
||||
github.com/dgraph-io/badger/v4 v4.2.0 h1:kJrlajbXXL9DFTNuhhu9yCx7JJa4qpYWxtE8BzuWsEs=
|
||||
github.com/dgraph-io/badger/v4 v4.2.0/go.mod h1:qfCqhPoWDFJRx1gp5QwwyGo8xk1lbHUxvK9nK0OGAak=
|
||||
github.com/dgraph-io/ristretto v0.1.1 h1:6CWw5tJNgpegArSHpNHJKldNeq03FQCwYvfMVWajOK8=
|
||||
github.com/dgraph-io/ristretto v0.1.1/go.mod h1:S1GPSBCYCIhmVNfcth17y2zZtQT6wzkzgwUve0VDWWA=
|
||||
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 h1:tdlZCpZ/P9DhczCTSixgIKmwPv6+wP5DGjqLYw5SUiA=
|
||||
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
|
||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
|
||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
|
||||
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
|
||||
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
|
||||
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
|
||||
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
|
||||
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
|
||||
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
|
||||
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
|
||||
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
|
||||
github.com/getsentry/sentry-go v0.18.0 h1:MtBW5H9QgdcJabtZcuJG80BMOwaBpkRDZkxRkNC1sN0=
|
||||
github.com/getsentry/sentry-go v0.18.0/go.mod h1:Kgon4Mby+FJ7ZWHFUAZgVaIa8sxHtnRJRLTXZr51aKQ=
|
||||
github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
|
||||
github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
|
||||
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
|
||||
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
|
||||
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
|
||||
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
|
||||
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
|
||||
github.com/go-redis/redis/v8 v8.11.5 h1:AcZZR7igkdvfVmQTPnu9WE37LRrO/YrBH5zWyjDC0oI=
|
||||
github.com/go-redis/redis/v8 v8.11.5/go.mod h1:gREzHqY1hg6oD9ngVRbLStwAWKhA0FEgq8Jd4h5lpwo=
|
||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
|
||||
7 hooks.go
@@ -62,6 +62,12 @@ var (
	ErrInvalidConfigType = errors.New("invalid config type provided")
)

// HookLoadConfig contains the hook and configuration as loaded from a configuration (usually file).
type HookLoadConfig struct {
	Hook   Hook
	Config any
}

// Hook provides an interface of handlers for different events which occur
// during the lifecycle of the broker.
type Hook interface {
@@ -70,6 +76,7 @@ type Hook interface {
	Init(config any) error
	Stop() error
	SetOpts(l *slog.Logger, o *HookOptions)

	OnStarted()
	OnStopped()
	OnConnectAuthenticate(cl *Client, pk packets.Packet) bool
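Hooks implement the lifecycle methods above; embedding `mqtt.HookBase` supplies no-op defaults so only the events of interest need overriding, and the new `HookLoadConfig` simply pairs a hook with the config value that will be handed to its `Init(config any)`. A minimal sketch of registering such a hook programmatically, assuming the exported hook event constants (e.g. `mqtt.OnStarted`) and the `ExampleHook` name, which is illustrative only:

```go
package main

import (
	"bytes"
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
)

// ExampleHook embeds mqtt.HookBase so unused lifecycle methods fall back to no-ops.
type ExampleHook struct {
	mqtt.HookBase
}

// ID identifies the hook in logs and diagnostics.
func (h *ExampleHook) ID() string { return "example-hook" }

// Provides declares which events this hook wants to receive.
func (h *ExampleHook) Provides(b byte) bool {
	return bytes.Contains([]byte{mqtt.OnStarted}, []byte{b})
}

// OnStarted fires once the broker has started.
func (h *ExampleHook) OnStarted() { log.Println("broker started") }

func main() {
	server := mqtt.New(nil)

	// The second argument to AddHook is passed through to the hook's Init(config any);
	// when hooks are loaded from a configuration file, a HookLoadConfig carries the
	// same value in its Config field.
	if err := server.AddHook(new(ExampleHook), nil); err != nil {
		log.Fatal(err)
	}

	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}
}
```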
hooks/debug/debug.go
@@ -16,9 +16,10 @@ import (

// Options contains configuration settings for the debug output.
type Options struct {
	ShowPacketData bool // include decoded packet data (default false)
	ShowPings      bool // show ping requests and responses (default false)
	ShowPasswords  bool // show connecting user passwords (default false)
	Enable         bool `yaml:"enable" json:"enable"`                     // non-zero field for enabling hook using file-based config
	ShowPacketData bool `yaml:"show_packet_data" json:"show_packet_data"` // include decoded packet data (default false)
	ShowPings      bool `yaml:"show_pings" json:"show_pings"`             // show ping requests and responses (default false)
	ShowPasswords  bool `yaml:"show_passwords" json:"show_passwords"`     // show connecting user passwords (default false)
}

// Hook is a debugging hook which logs additional low-level information from the server.
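The new struct tags let the debug options be populated from a YAML (or JSON) document when the hook is enabled through file-based configuration. A minimal sketch, assuming `gopkg.in/yaml.v3` for decoding; the YAML keys simply mirror the tags above:

```go
package main

import (
	"fmt"

	"github.com/mochi-mqtt/server/v2/hooks/debug"
	"gopkg.in/yaml.v3"
)

// raw mimics the fragment of a config file that would configure the debug hook.
const raw = `
enable: true
show_packet_data: true
show_pings: false
show_passwords: false
`

func main() {
	var opts debug.Options
	if err := yaml.Unmarshal([]byte(raw), &opts); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", opts)
}
```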
hooks/storage/badger/badger.go
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co
// SPDX-FileContributor: mochi-co, gsagula
// SPDX-FileContributor: mochi-co, gsagula, werbenhu

package badger

@@ -9,23 +9,25 @@ import (
	"errors"
	"fmt"
	"strings"
	"time"

	badgerdb "github.com/dgraph-io/badger/v4"
	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/hooks/storage"
	"github.com/mochi-mqtt/server/v2/packets"
	"github.com/mochi-mqtt/server/v2/system"

	"github.com/timshannon/badgerhold"
)

const (
	// defaultDbFile is the default file path for the badger db file.
	defaultDbFile         = ".badger"
	defaultDbFile         = ".badger"
	defaultGcInterval     = 5 * 60 // gc interval in seconds
	defaultGcDiscardRatio = 0.5
)

// clientKey returns a primary key for a client.
func clientKey(cl *mqtt.Client) string {
	return cl.ID
	return storage.ClientKey + "_" + cl.ID
}

// subscriptionKey returns a primary key for a subscription.
@@ -48,17 +50,30 @@ func sysInfoKey() string {
	return storage.SysInfoKey
}

// Serializable is an interface for objects that can be serialized and deserialized.
type Serializable interface {
	UnmarshalBinary([]byte) error
	MarshalBinary() (data []byte, err error)
}

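Values persisted by the hook satisfy this `Serializable` contract. A minimal sketch of a conforming type, using JSON purely for illustration (the `sessionNote` type is hypothetical; the storage package's own entities define their real encodings):

```go
package main

import "encoding/json"

// sessionNote is a hypothetical value that could be stored through the hook's
// key/value helpers; any type with these two methods satisfies Serializable.
type sessionNote struct {
	ClientID string `json:"client_id"`
	Note     string `json:"note"`
}

// MarshalBinary encodes the value for storage.
func (s sessionNote) MarshalBinary() ([]byte, error) {
	return json.Marshal(s)
}

// UnmarshalBinary decodes a stored value back into the struct.
func (s *sessionNote) UnmarshalBinary(data []byte) error {
	if len(data) == 0 {
		return nil
	}
	return json.Unmarshal(data, s)
}

func main() {
	in := sessionNote{ClientID: "mochi", Note: "hello"}
	raw, _ := in.MarshalBinary()

	var out sessionNote
	_ = out.UnmarshalBinary(raw)
}
```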
// Options contains configuration settings for the BadgerDB instance.
type Options struct {
	Options *badgerhold.Options
	Path    string
	Options *badgerdb.Options
	Path    string `yaml:"path" json:"path"`
	// GcDiscardRatio specifies the ratio of log discard compared to the maximum possible log discard.
	// Setting it to a higher value would result in fewer space reclaims, while setting it to a lower value
	// would result in more space reclaims at the cost of increased activity on the LSM tree.
	// discardRatio must be in the range (0.0, 1.0), both endpoints excluded, otherwise, it will be set to the default value of 0.5.
	GcDiscardRatio float64 `yaml:"gc_discard_ratio" json:"gc_discard_ratio"`
	GcInterval     int64   `yaml:"gc_interval" json:"gc_interval"`
}

// Hook is a persistent storage hook based using BadgerDB file store as a backend.
type Hook struct {
	mqtt.HookBase
	config   *Options          // options for configuring the BadgerDB instance.
	db       *badgerhold.Store // the BadgerDB instance.
	config   *Options          // options for configuring the BadgerDB instance.
	gcTicker *time.Ticker      // Ticker for BadgerDB garbage collection.
	db       *badgerdb.DB      // the BadgerDB instance.
}

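With the new fields, the hook can be tuned when it is added to the server. A minimal sketch of programmatic registration; the values shown simply restate the defaults from the const block above:

```go
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/hooks/storage/badger"
)

func main() {
	server := mqtt.New(nil)

	// Path, GcInterval and GcDiscardRatio are optional; zero values fall back
	// to the defaults (".badger", 300 seconds, 0.5) inside Init.
	err := server.AddHook(new(badger.Hook), &badger.Options{
		Path:           ".badger",
		GcInterval:     5 * 60,
		GcDiscardRatio: 0.5,
	})
	if err != nil {
		log.Fatal(err)
	}

	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}
}
```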
// ID returns the id of the hook.
@@ -89,6 +104,21 @@ func (h *Hook) Provides(b byte) bool {
	}, []byte{b})
}

// GcLoop periodically runs the garbage collection process to reclaim space in the value log files.
// It uses a ticker to trigger the garbage collection at regular intervals specified by the configuration.
// Refer to: https://dgraph.io/docs/badger/get-started/#garbage-collection
func (h *Hook) gcLoop() {
	for range h.gcTicker.C {
	again:
		// Run the garbage collection process with a threshold.
		// If the process returns nil (success), repeat the process.
		err := h.db.RunValueLogGC(h.config.GcDiscardRatio)
		if err == nil {
			goto again // Retry garbage collection if successful.
		}
	}
}

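The goto-based retry reads as: keep collecting value log files while each pass reports progress (a nil error); badger signals that nothing was reclaimed, typically with `badgerdb.ErrNoRewrite`, which ends the retry until the next tick. An equivalent loop-based sketch of the same idea, using only the badger calls shown in the diff:

```go
package main

import (
	badgerdb "github.com/dgraph-io/badger/v4"
)

// collectValueLog keeps invoking value-log GC while it makes progress.
// RunValueLogGC returns nil when a log file was rewritten, and an error
// (typically badgerdb.ErrNoRewrite) once there is nothing left to reclaim.
func collectValueLog(db *badgerdb.DB, discardRatio float64) {
	for {
		if err := db.RunValueLogGC(discardRatio); err != nil {
			return
		}
	}
}

func main() {
	db, err := badgerdb.Open(badgerdb.DefaultOptions("/tmp/badger-example"))
	if err != nil {
		panic(err)
	}
	defer db.Close()

	collectValueLog(db, 0.5)
}
```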
// Init initializes and connects to the badger instance.
func (h *Hook) Init(config any) error {
	if _, ok := config.(*Options); !ok && config != nil {
@@ -96,30 +126,46 @@ func (h *Hook) Init(config any) error {
	}

	if config == nil {
		config = new(Options)
		h.config = new(Options)
	} else {
		h.config = config.(*Options)
	}

	h.config = config.(*Options)
	if h.config.Path == "" {
	if len(h.config.Path) == 0 {
		h.config.Path = defaultDbFile
	}

	options := badgerhold.DefaultOptions
	options.Dir = h.config.Path
	options.ValueDir = h.config.Path
	options.Logger = h
	if h.config.GcInterval == 0 {
		h.config.GcInterval = defaultGcInterval
	}

	if h.config.GcDiscardRatio <= 0.0 || h.config.GcDiscardRatio >= 1.0 {
		h.config.GcDiscardRatio = defaultGcDiscardRatio
	}

	if h.config.Options == nil {
		defaultOpts := badgerdb.DefaultOptions(h.config.Path)
		h.config.Options = &defaultOpts
	}
	h.config.Options.Logger = h

	var err error
	h.db, err = badgerhold.Open(options)
	h.db, err = badgerdb.Open(*h.config.Options)
	if err != nil {
		return err
	}

	h.gcTicker = time.NewTicker(time.Duration(h.config.GcInterval) * time.Second)
	go h.gcLoop()

	return nil
}

// Stop closes the badger instance.
func (h *Hook) Stop() error {
	if h.gcTicker != nil {
		h.gcTicker.Stop()
	}
	return h.db.Close()
}

@@ -142,7 +188,7 @@ func (h *Hook) updateClient(cl *mqtt.Client) {

	props := cl.Properties.Props.Copy(false)
	in := &storage.Client{
		ID:       clientKey(cl),
		ID:       cl.ID,
		T:        storage.ClientKey,
		Remote:   cl.Net.Remote,
		Listener: cl.Net.Listener,
@@ -163,7 +209,7 @@ func (h *Hook) updateClient(cl *mqtt.Client) {
		Will: storage.ClientWill(cl.Properties.Will),
	}

	err := h.db.Upsert(in.ID, in)
	err := h.setKv(clientKey(cl), in)
	if err != nil {
		h.Log.Error("failed to upsert client data", "error", err, "data", in)
	}
@@ -182,14 +228,11 @@ func (h *Hook) OnDisconnect(cl *mqtt.Client, _ error, expire bool) {
		return
	}

	if cl.StopCause() == packets.ErrSessionTakenOver {
	if errors.Is(cl.StopCause(), packets.ErrSessionTakenOver) {
		return
	}

	err := h.db.Delete(clientKey(cl), new(storage.Client))
	if err != nil {
		h.Log.Error("failed to delete client data", "error", err, "data", clientKey(cl))
	}
	_ = h.delKv(clientKey(cl))
}

// OnSubscribed adds one or more client subscriptions to the store.
|
||||
@@ -213,10 +256,7 @@ func (h *Hook) OnSubscribed(cl *mqtt.Client, pk packets.Packet, reasonCodes []by
|
||||
RetainAsPublished: pk.Filters[i].RetainAsPublished,
|
||||
}
|
||||
|
||||
err := h.db.Upsert(in.ID, in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to upsert subscription data", "error", err, "data", in)
|
||||
}
|
||||
_ = h.setKv(in.ID, in)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -228,10 +268,7 @@ func (h *Hook) OnUnsubscribed(cl *mqtt.Client, pk packets.Packet) {
|
||||
}
|
||||
|
||||
for i := 0; i < len(pk.Filters); i++ {
|
||||
err := h.db.Delete(subscriptionKey(cl, pk.Filters[i].Filter), new(storage.Subscription))
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete subscription data", "error", err, "data", subscriptionKey(cl, pk.Filters[i].Filter))
|
||||
}
|
||||
_ = h.delKv(subscriptionKey(cl, pk.Filters[i].Filter))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -243,11 +280,7 @@ func (h *Hook) OnRetainMessage(cl *mqtt.Client, pk packets.Packet, r int64) {
|
||||
}
|
||||
|
||||
if r == -1 {
|
||||
err := h.db.Delete(retainedKey(pk.TopicName), new(storage.Message))
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete retained message data", "error", err, "data", retainedKey(pk.TopicName))
|
||||
}
|
||||
|
||||
_ = h.delKv(retainedKey(pk.TopicName))
|
||||
return
|
||||
}
|
||||
|
||||
@@ -259,6 +292,7 @@ func (h *Hook) OnRetainMessage(cl *mqtt.Client, pk packets.Packet, r int64) {
|
||||
TopicName: pk.TopicName,
|
||||
Payload: pk.Payload,
|
||||
Created: pk.Created,
|
||||
Client: cl.ID,
|
||||
Origin: pk.Origin,
|
||||
Properties: storage.MessageProperties{
|
||||
PayloadFormat: props.PayloadFormat,
|
||||
@@ -272,10 +306,7 @@ func (h *Hook) OnRetainMessage(cl *mqtt.Client, pk packets.Packet, r int64) {
|
||||
},
|
||||
}
|
||||
|
||||
err := h.db.Upsert(in.ID, in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to upsert retained message data", "error", err, "data", in)
|
||||
}
|
||||
_ = h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnQosPublish adds or updates an inflight message in the store.
|
||||
@@ -289,6 +320,7 @@ func (h *Hook) OnQosPublish(cl *mqtt.Client, pk packets.Packet, sent int64, rese
|
||||
in := &storage.Message{
|
||||
ID: inflightKey(cl, pk),
|
||||
T: storage.InflightKey,
|
||||
Client: cl.ID,
|
||||
Origin: pk.Origin,
|
||||
PacketID: pk.PacketID,
|
||||
FixedHeader: pk.FixedHeader,
|
||||
@@ -308,10 +340,7 @@ func (h *Hook) OnQosPublish(cl *mqtt.Client, pk packets.Packet, sent int64, rese
|
||||
},
|
||||
}
|
||||
|
||||
err := h.db.Upsert(in.ID, in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to upsert qos inflight data", "error", err, "data", in)
|
||||
}
|
||||
_ = h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnQosComplete removes a resolved inflight message from the store.
|
||||
@@ -321,10 +350,7 @@ func (h *Hook) OnQosComplete(cl *mqtt.Client, pk packets.Packet) {
|
||||
return
|
||||
}
|
||||
|
||||
err := h.db.Delete(inflightKey(cl, pk), new(storage.Message))
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete inflight message data", "error", err, "data", inflightKey(cl, pk))
|
||||
}
|
||||
_ = h.delKv(inflightKey(cl, pk))
|
||||
}
|
||||
|
||||
// OnQosDropped removes a dropped inflight message from the store.
|
||||
@@ -349,10 +375,7 @@ func (h *Hook) OnSysInfoTick(sys *system.Info) {
|
||||
Info: *sys.Clone(),
|
||||
}
|
||||
|
||||
err := h.db.Upsert(in.ID, in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to upsert $SYS data", "error", err, "data", in)
|
||||
}
|
||||
_ = h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnRetainedExpired deletes expired retained messages from the store.
|
||||
@@ -362,10 +385,7 @@ func (h *Hook) OnRetainedExpired(filter string) {
|
||||
return
|
||||
}
|
||||
|
||||
err := h.db.Delete(retainedKey(filter), new(storage.Message))
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete expired retained message data", "error", err, "id", retainedKey(filter))
|
||||
}
|
||||
_ = h.delKv(retainedKey(filter))
|
||||
}
|
||||
|
||||
// OnClientExpired deletes expired clients from the store.
|
||||
@@ -375,10 +395,7 @@ func (h *Hook) OnClientExpired(cl *mqtt.Client) {
|
||||
return
|
||||
}
|
||||
|
||||
err := h.db.Delete(clientKey(cl), new(storage.Client))
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete expired client data", "error", err, "id", clientKey(cl))
|
||||
}
|
||||
_ = h.delKv(clientKey(cl))
|
||||
}
|
||||
|
||||
// StoredClients returns all stored clients from the store.
|
||||
@@ -388,12 +405,15 @@ func (h *Hook) StoredClients() (v []storage.Client, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
err = h.db.Find(&v, badgerhold.Where("T").Eq(storage.ClientKey))
|
||||
if err != nil && !errors.Is(err, badgerhold.ErrNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
err = h.iterKv(storage.ClientKey, func(value []byte) error {
|
||||
obj := storage.Client{}
|
||||
err = obj.UnmarshalBinary(value)
|
||||
if err == nil {
|
||||
v = append(v, obj)
|
||||
}
|
||||
return err
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
// StoredSubscriptions returns all stored subscriptions from the store.
|
||||
@@ -403,12 +423,16 @@ func (h *Hook) StoredSubscriptions() (v []storage.Subscription, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
err = h.db.Find(&v, badgerhold.Where("T").Eq(storage.SubscriptionKey))
|
||||
if err != nil && !errors.Is(err, badgerhold.ErrNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
v = make([]storage.Subscription, 0)
|
||||
err = h.iterKv(storage.SubscriptionKey, func(value []byte) error {
|
||||
obj := storage.Subscription{}
|
||||
err = obj.UnmarshalBinary(value)
|
||||
if err == nil {
|
||||
v = append(v, obj)
|
||||
}
|
||||
return err
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
// StoredRetainedMessages returns all stored retained messages from the store.
|
||||
@@ -418,12 +442,20 @@ func (h *Hook) StoredRetainedMessages() (v []storage.Message, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
err = h.db.Find(&v, badgerhold.Where("T").Eq(storage.RetainedKey))
|
||||
if err != nil && !errors.Is(err, badgerhold.ErrNotFound) {
|
||||
v = make([]storage.Message, 0)
|
||||
err = h.iterKv(storage.RetainedKey, func(value []byte) error {
|
||||
obj := storage.Message{}
|
||||
err = obj.UnmarshalBinary(value)
|
||||
if err == nil {
|
||||
v = append(v, obj)
|
||||
}
|
||||
return err
|
||||
})
|
||||
|
||||
if err != nil && !errors.Is(err, badgerdb.ErrKeyNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
return
|
||||
}
|
||||
|
||||
// StoredInflightMessages returns all stored inflight messages from the store.
|
||||
@@ -433,12 +465,16 @@ func (h *Hook) StoredInflightMessages() (v []storage.Message, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
err = h.db.Find(&v, badgerhold.Where("T").Eq(storage.InflightKey))
|
||||
if err != nil && !errors.Is(err, badgerhold.ErrNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
v = make([]storage.Message, 0)
|
||||
err = h.iterKv(storage.InflightKey, func(value []byte) error {
|
||||
obj := storage.Message{}
|
||||
err = obj.UnmarshalBinary(value)
|
||||
if err == nil {
|
||||
v = append(v, obj)
|
||||
}
|
||||
return err
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
// StoredSysInfo returns the system info from the store.
|
||||
@@ -448,31 +484,95 @@ func (h *Hook) StoredSysInfo() (v storage.SystemInfo, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
err = h.db.Get(storage.SysInfoKey, &v)
|
||||
if err != nil && !errors.Is(err, badgerhold.ErrNotFound) {
|
||||
err = h.getKv(storage.SysInfoKey, &v)
|
||||
if err != nil && !errors.Is(err, badgerdb.ErrKeyNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// Errorf satisfies the badger interface for an error logger.
|
||||
func (h *Hook) Errorf(m string, v ...interface{}) {
|
||||
func (h *Hook) Errorf(m string, v ...any) {
|
||||
h.Log.Error(fmt.Sprintf(strings.ToLower(strings.Trim(m, "\n")), v...), "v", v)
|
||||
|
||||
}
|
||||
|
||||
// Warningf satisfies the badger interface for a warning logger.
|
||||
func (h *Hook) Warningf(m string, v ...interface{}) {
|
||||
func (h *Hook) Warningf(m string, v ...any) {
|
||||
h.Log.Warn(fmt.Sprintf(strings.ToLower(strings.Trim(m, "\n")), v...), "v", v)
|
||||
}
|
||||
|
||||
// Infof satisfies the badger interface for an info logger.
|
||||
func (h *Hook) Infof(m string, v ...interface{}) {
|
||||
func (h *Hook) Infof(m string, v ...any) {
|
||||
h.Log.Info(fmt.Sprintf(strings.ToLower(strings.Trim(m, "\n")), v...), "v", v)
|
||||
}
|
||||
|
||||
// Debugf satisfies the badger interface for a debug logger.
|
||||
func (h *Hook) Debugf(m string, v ...interface{}) {
|
||||
func (h *Hook) Debugf(m string, v ...any) {
|
||||
h.Log.Debug(fmt.Sprintf(strings.ToLower(strings.Trim(m, "\n")), v...), "v", v)
|
||||
}
|
||||
|
||||
// setKv stores a key-value pair in the database.
|
||||
func (h *Hook) setKv(k string, v storage.Serializable) error {
|
||||
err := h.db.Update(func(txn *badgerdb.Txn) error {
|
||||
data, _ := v.MarshalBinary()
|
||||
return txn.Set([]byte(k), data)
|
||||
})
|
||||
if err != nil {
|
||||
h.Log.Error("failed to upsert data", "error", err, "key", k)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// delKv deletes a key-value pair from the database.
|
||||
func (h *Hook) delKv(k string) error {
|
||||
err := h.db.Update(func(txn *badgerdb.Txn) error {
|
||||
return txn.Delete([]byte(k))
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete data", "error", err, "key", k)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// getKv retrieves the value associated with a key from the database.
|
||||
func (h *Hook) getKv(k string, v storage.Serializable) error {
|
||||
return h.db.View(func(txn *badgerdb.Txn) error {
|
||||
item, err := txn.Get([]byte(k))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
value, err := item.ValueCopy(nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return v.UnmarshalBinary(value)
|
||||
})
|
||||
}
|
||||
|
||||
// iterKv iterates over key-value pairs with keys having the specified prefix in the database.
|
||||
func (h *Hook) iterKv(prefix string, visit func([]byte) error) error {
|
||||
err := h.db.View(func(txn *badgerdb.Txn) error {
|
||||
iterator := txn.NewIterator(badgerdb.DefaultIteratorOptions)
|
||||
defer iterator.Close()
|
||||
|
||||
for iterator.Seek([]byte(prefix)); iterator.ValidForPrefix([]byte(prefix)); iterator.Next() {
|
||||
item := iterator.Item()
|
||||
value, err := item.ValueCopy(nil)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := visit(value); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
})
|
||||
if err != nil {
|
||||
h.Log.Error("failed to find data", "error", err, "prefix", prefix)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -1,22 +1,23 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: mochi-co
|
||||
// SPDX-FileContributor: mochi-co, werbenhu
|
||||
|
||||
package badger
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"log/slog"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
badgerdb "github.com/dgraph-io/badger/v4"
|
||||
mqtt "github.com/mochi-mqtt/server/v2"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage"
|
||||
"github.com/mochi-mqtt/server/v2/packets"
|
||||
"github.com/mochi-mqtt/server/v2/system"
|
||||
"github.com/stretchr/testify/require"
|
||||
"github.com/timshannon/badgerhold"
|
||||
)
|
||||
|
||||
var (
|
||||
@@ -39,14 +40,37 @@ var (
|
||||
|
||||
func teardown(t *testing.T, path string, h *Hook) {
|
||||
_ = h.Stop()
|
||||
_ = h.db.Badger().Close()
|
||||
_ = h.db.Close()
|
||||
err := os.RemoveAll("./" + strings.Replace(path, "..", "", -1))
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestSetGetDelKv(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.Init(nil)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
key := "testKey"
|
||||
value := &storage.Client{ID: "cl1"}
|
||||
err := h.setKv(key, value)
|
||||
require.NoError(t, err)
|
||||
|
||||
var client storage.Client
|
||||
err = h.getKv(key, &client)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "cl1", client.ID)
|
||||
|
||||
err = h.delKv(key)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.getKv(key, &client)
|
||||
require.ErrorIs(t, badgerdb.ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestClientKey(t *testing.T) {
|
||||
k := clientKey(&mqtt.Client{ID: "cl1"})
|
||||
require.Equal(t, "cl1", k)
|
||||
require.Equal(t, storage.ClientKey+"_cl1", k)
|
||||
}
|
||||
|
||||
func TestSubscriptionKey(t *testing.T) {
|
||||
@@ -101,6 +125,19 @@ func TestInitBadConfig(t *testing.T) {
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestInitBadOption(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
err := h.Init(&Options{
|
||||
Options: &badgerdb.Options{
|
||||
NumCompactors: 1,
|
||||
},
|
||||
})
|
||||
// Cannot have 1 compactor. Need at least 2
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestInitUseDefaults(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
@@ -121,7 +158,7 @@ func TestOnSessionEstablishedThenOnDisconnect(t *testing.T) {
|
||||
h.OnSessionEstablished(client, packets.Packet{})
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.db.Get(clientKey(client), r)
|
||||
err = h.getKv(clientKey(client), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.ID)
|
||||
require.Equal(t, client.Properties.Username, r.Username)
|
||||
@@ -132,15 +169,15 @@ func TestOnSessionEstablishedThenOnDisconnect(t *testing.T) {
|
||||
|
||||
h.OnDisconnect(client, nil, false)
|
||||
r2 := new(storage.Client)
|
||||
err = h.db.Get(clientKey(client), r2)
|
||||
err = h.getKv(clientKey(client), r2)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.ID)
|
||||
|
||||
h.OnDisconnect(client, nil, true)
|
||||
r3 := new(storage.Client)
|
||||
err = h.db.Get(clientKey(client), r3)
|
||||
err = h.getKv(clientKey(client), r3)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, badgerhold.ErrNotFound, err)
|
||||
require.ErrorIs(t, badgerdb.ErrKeyNotFound, err)
|
||||
require.Empty(t, r3.ID)
|
||||
}
|
||||
|
||||
@@ -154,18 +191,19 @@ func TestOnClientExpired(t *testing.T) {
|
||||
cl := &mqtt.Client{ID: "cl1"}
|
||||
clientKey := clientKey(cl)
|
||||
|
||||
err = h.db.Upsert(clientKey, &storage.Client{ID: cl.ID})
|
||||
err = h.setKv(clientKey, &storage.Client{ID: cl.ID})
|
||||
require.NoError(t, err)
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.db.Get(clientKey, r)
|
||||
err = h.getKv(clientKey, r)
|
||||
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, cl.ID, r.ID)
|
||||
|
||||
h.OnClientExpired(cl)
|
||||
err = h.db.Get(clientKey, r)
|
||||
err = h.getKv(clientKey, r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, badgerhold.ErrNotFound, err)
|
||||
require.ErrorIs(t, badgerdb.ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestOnClientExpiredNoDB(t *testing.T) {
|
||||
@@ -210,7 +248,7 @@ func TestOnWillSent(t *testing.T) {
|
||||
h.OnWillSent(c1, packets.Packet{})
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.db.Get(clientKey(client), r)
|
||||
err = h.getKv(clientKey(client), r)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, uint32(1), r.Will.Flag)
|
||||
@@ -265,16 +303,16 @@ func TestOnSubscribedThenOnUnsubscribed(t *testing.T) {
|
||||
h.OnSubscribed(client, pkf, []byte{0})
|
||||
r := new(storage.Subscription)
|
||||
|
||||
err = h.db.Get(subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
err = h.getKv(subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.Client)
|
||||
require.Equal(t, pkf.Filters[0].Filter, r.Filter)
|
||||
require.Equal(t, byte(0), r.Qos)
|
||||
|
||||
h.OnUnsubscribed(client, pkf)
|
||||
err = h.db.Get(subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
err = h.getKv(subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
require.Error(t, err)
|
||||
require.Equal(t, badgerhold.ErrNotFound, err)
|
||||
require.Equal(t, badgerdb.ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestOnSubscribedNoDB(t *testing.T) {
|
||||
@@ -325,21 +363,21 @@ func TestOnRetainMessageThenUnset(t *testing.T) {
|
||||
h.OnRetainMessage(client, pk, 1)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.db.Get(retainedKey(pk.TopicName), r)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, pk.TopicName, r.TopicName)
|
||||
require.Equal(t, pk.Payload, r.Payload)
|
||||
|
||||
h.OnRetainMessage(client, pk, -1)
|
||||
err = h.db.Get(retainedKey(pk.TopicName), r)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, badgerhold.ErrNotFound)
|
||||
require.ErrorIs(t, err, badgerdb.ErrKeyNotFound)
|
||||
|
||||
// coverage: delete deleted
|
||||
h.OnRetainMessage(client, pk, -1)
|
||||
err = h.db.Get(retainedKey(pk.TopicName), r)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, badgerhold.ErrNotFound)
|
||||
require.ErrorIs(t, err, badgerdb.ErrKeyNotFound)
|
||||
}
|
||||
|
||||
func TestOnRetainedExpired(t *testing.T) {
|
||||
@@ -355,18 +393,18 @@ func TestOnRetainedExpired(t *testing.T) {
|
||||
TopicName: "a/b/c",
|
||||
}
|
||||
|
||||
err = h.db.Upsert(m.ID, m)
|
||||
err = h.setKv(m.ID, m)
|
||||
require.NoError(t, err)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.db.Get(m.ID, r)
|
||||
err = h.getKv(m.ID, r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, m.TopicName, r.TopicName)
|
||||
|
||||
h.OnRetainedExpired(m.TopicName)
|
||||
err = h.db.Get(m.ID, r)
|
||||
err = h.getKv(m.ID, r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, badgerhold.ErrNotFound)
|
||||
require.ErrorIs(t, err, badgerdb.ErrKeyNotFound)
|
||||
}
|
||||
|
||||
func TestOnRetainExpiredNoDB(t *testing.T) {
|
||||
@@ -418,7 +456,7 @@ func TestOnQosPublishThenQOSComplete(t *testing.T) {
|
||||
h.OnQosPublish(client, pk, time.Now().Unix(), 0)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.db.Get(inflightKey(client, pk), r)
|
||||
err = h.getKv(inflightKey(client, pk), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, pk.TopicName, r.TopicName)
|
||||
require.Equal(t, pk.Payload, r.Payload)
|
||||
@@ -429,9 +467,9 @@ func TestOnQosPublishThenQOSComplete(t *testing.T) {
|
||||
|
||||
// OnQosDropped is a passthrough to OnQosComplete here
|
||||
h.OnQosDropped(client, pk)
|
||||
err = h.db.Get(inflightKey(client, pk), r)
|
||||
err = h.getKv(inflightKey(client, pk), r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, badgerhold.ErrNotFound)
|
||||
require.ErrorIs(t, err, badgerdb.ErrKeyNotFound)
|
||||
}
|
||||
|
||||
func TestOnQosPublishNoDB(t *testing.T) {
|
||||
@@ -485,7 +523,7 @@ func TestOnSysInfoTick(t *testing.T) {
|
||||
h.OnSysInfoTick(info)
|
||||
|
||||
r := new(storage.SystemInfo)
|
||||
err = h.db.Get(storage.SysInfoKey, r)
|
||||
err = h.getKv(storage.SysInfoKey, r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, info.Version, r.Version)
|
||||
require.Equal(t, info.BytesReceived, r.BytesReceived)
|
||||
@@ -515,13 +553,13 @@ func TestStoredClients(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with clients
|
||||
err = h.db.Upsert("cl1", &storage.Client{ID: "cl1", T: storage.ClientKey})
|
||||
err = h.setKv(storage.ClientKey+"_cl1", &storage.Client{ID: "cl1", T: storage.ClientKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("cl2", &storage.Client{ID: "cl2", T: storage.ClientKey})
|
||||
err = h.setKv(storage.ClientKey+"_cl2", &storage.Client{ID: "cl2", T: storage.ClientKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("cl3", &storage.Client{ID: "cl3", T: storage.ClientKey})
|
||||
err = h.setKv(storage.ClientKey+"_cl3", &storage.Client{ID: "cl3", T: storage.ClientKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredClients()
|
||||
@@ -548,13 +586,13 @@ func TestStoredSubscriptions(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with subscriptions
|
||||
err = h.db.Upsert("sub1", &storage.Subscription{ID: "sub1", T: storage.SubscriptionKey})
|
||||
err = h.setKv(storage.SubscriptionKey+"_sub1", &storage.Subscription{ID: "sub1", T: storage.SubscriptionKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("sub2", &storage.Subscription{ID: "sub2", T: storage.SubscriptionKey})
|
||||
err = h.setKv(storage.SubscriptionKey+"_sub2", &storage.Subscription{ID: "sub2", T: storage.SubscriptionKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("sub3", &storage.Subscription{ID: "sub3", T: storage.SubscriptionKey})
|
||||
err = h.setKv(storage.SubscriptionKey+"_sub3", &storage.Subscription{ID: "sub3", T: storage.SubscriptionKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredSubscriptions()
|
||||
@@ -581,16 +619,16 @@ func TestStoredRetainedMessages(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with messages
|
||||
err = h.db.Upsert("m1", &storage.Message{ID: "m1", T: storage.RetainedKey})
|
||||
err = h.setKv(storage.RetainedKey+"_m1", &storage.Message{ID: "m1", T: storage.RetainedKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("m2", &storage.Message{ID: "m2", T: storage.RetainedKey})
|
||||
err = h.setKv(storage.RetainedKey+"_m2", &storage.Message{ID: "m2", T: storage.RetainedKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("m3", &storage.Message{ID: "m3", T: storage.RetainedKey})
|
||||
err = h.setKv(storage.RetainedKey+"_m3", &storage.Message{ID: "m3", T: storage.RetainedKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("i3", &storage.Message{ID: "i3", T: storage.InflightKey})
|
||||
err = h.setKv(storage.InflightKey+"_i3", &storage.Message{ID: "i3", T: storage.InflightKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredRetainedMessages()
|
||||
@@ -617,16 +655,16 @@ func TestStoredInflightMessages(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with messages
|
||||
err = h.db.Upsert("i1", &storage.Message{ID: "i1", T: storage.InflightKey})
|
||||
err = h.setKv(storage.InflightKey+"_i1", &storage.Message{ID: "i1", T: storage.InflightKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("i2", &storage.Message{ID: "i2", T: storage.InflightKey})
|
||||
err = h.setKv(storage.InflightKey+"_i2", &storage.Message{ID: "i2", T: storage.InflightKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("i3", &storage.Message{ID: "i3", T: storage.InflightKey})
|
||||
err = h.setKv(storage.InflightKey+"_i3", &storage.Message{ID: "i3", T: storage.InflightKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Upsert("m1", &storage.Message{ID: "m1", T: storage.RetainedKey})
|
||||
err = h.setKv(storage.RetainedKey+"_m1", &storage.Message{ID: "m1", T: storage.RetainedKey})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredInflightMessages()
|
||||
@@ -653,7 +691,7 @@ func TestStoredSysInfo(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with messages
|
||||
err = h.db.Upsert(storage.SysInfoKey, &storage.SystemInfo{
|
||||
err = h.setKv(storage.SysInfoKey, &storage.SystemInfo{
|
||||
ID: storage.SysInfoKey,
|
||||
Info: system.Info{
|
||||
Version: "2.0.0",
|
||||
@@ -702,3 +740,69 @@ func TestDebugf(t *testing.T) {
|
||||
h.SetOpts(logger, nil)
|
||||
h.Debugf("test", 1, 2, 3)
|
||||
}
|
||||
|
||||
func TestGcLoop(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
opts := badgerdb.DefaultOptions(defaultDbFile)
|
||||
opts.ValueLogFileSize = 1 << 20
|
||||
h.Init(&Options{
|
||||
GcInterval: 2, // Set the interval for garbage collection.
|
||||
Options: &opts,
|
||||
})
|
||||
defer teardown(t, defaultDbFile, h)
|
||||
h.OnSessionEstablished(client, packets.Packet{})
|
||||
h.OnDisconnect(client, nil, true)
|
||||
time.Sleep(3 * time.Second)
|
||||
}
|
||||
|
||||
func TestGetSetDelKv(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv("testKey", &storage.Client{ID: "testId"})
|
||||
require.NoError(t, err)
|
||||
|
||||
var obj storage.Client
|
||||
err = h.getKv("testKey", &obj)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.delKv("testKey")
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.getKv("testKey", &obj)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, badgerdb.ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestIterKv(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
err := h.Init(nil)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
require.NoError(t, err)
|
||||
|
||||
h.setKv("prefix_a_1", &storage.Client{ID: "1"})
|
||||
h.setKv("prefix_a_2", &storage.Client{ID: "2"})
|
||||
h.setKv("prefix_b_2", &storage.Client{ID: "3"})
|
||||
|
||||
var clients []storage.Client
|
||||
err = h.iterKv("prefix_a", func(data []byte) error {
|
||||
var item storage.Client
|
||||
item.UnmarshalBinary(data)
|
||||
clients = append(clients, item)
|
||||
return nil
|
||||
})
|
||||
require.Equal(t, 2, len(clients))
|
||||
require.NoError(t, err)
|
||||
|
||||
visitErr := errors.New("iter visit error")
|
||||
err = h.iterKv("prefix_b", func(data []byte) error {
|
||||
return visitErr
|
||||
})
|
||||
require.ErrorIs(t, visitErr, err)
|
||||
}
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: mochi-co
|
||||
// SPDX-FileContributor: mochi-co, werbenhu
|
||||
|
||||
// Package bolt is provided for historical compatibility and may not be actively updated; you should use the badger hook instead.
|
||||
package bolt
|
||||
@@ -14,23 +14,27 @@ import (
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage"
|
||||
"github.com/mochi-mqtt/server/v2/packets"
|
||||
"github.com/mochi-mqtt/server/v2/system"
|
||||
|
||||
sgob "github.com/asdine/storm/codec/gob"
|
||||
"github.com/asdine/storm/v3"
|
||||
"go.etcd.io/bbolt"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrBucketNotFound = errors.New("bucket not found")
|
||||
ErrKeyNotFound = errors.New("key not found")
|
||||
)
|
||||
|
||||
const (
|
||||
// defaultDbFile is the default file path for the boltdb file.
|
||||
defaultDbFile = "bolt.db"
|
||||
defaultDbFile = ".bolt"
|
||||
|
||||
// defaultTimeout is the default time to hold a connection to the file.
|
||||
defaultTimeout = 250 * time.Millisecond
|
||||
|
||||
defaultBucket = "mochi"
|
||||
)
|
||||
|
||||
// clientKey returns a primary key for a client.
|
||||
func clientKey(cl *mqtt.Client) string {
|
||||
return cl.ID
|
||||
return storage.ClientKey + "_" + cl.ID
|
||||
}
|
||||
|
||||
// subscriptionKey returns a primary key for a subscription.
|
||||
@@ -56,14 +60,15 @@ func sysInfoKey() string {
|
||||
// Options contains configuration settings for the bolt instance.
|
||||
type Options struct {
|
||||
Options *bbolt.Options
|
||||
Path string
|
||||
Bucket string `yaml:"bucket" json:"bucket"`
|
||||
Path string `yaml:"path" json:"path"`
|
||||
}
|
||||
|
||||
// Hook is a persistent storage hook using a boltdb file store as a backend.
|
||||
type Hook struct {
|
||||
mqtt.HookBase
|
||||
config *Options // options for configuring the boltdb instance.
|
||||
db *storm.DB // the boltdb instance.
|
||||
db *bbolt.DB // the boltdb instance.
|
||||
}
|
||||
|
||||
// ID returns the id of the hook.
|
||||
@@ -110,22 +115,32 @@ func (h *Hook) Init(config any) error {
|
||||
Timeout: defaultTimeout,
|
||||
}
|
||||
}
|
||||
if h.config.Path == "" {
|
||||
if len(h.config.Path) == 0 {
|
||||
h.config.Path = defaultDbFile
|
||||
}
|
||||
|
||||
if len(h.config.Bucket) == 0 {
|
||||
h.config.Bucket = defaultBucket
|
||||
}
|
||||
|
||||
var err error
|
||||
h.db, err = storm.Open(h.config.Path, storm.BoltOptions(0600, h.config.Options), storm.Codec(sgob.Codec))
|
||||
h.db, err = bbolt.Open(h.config.Path, 0600, h.config.Options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
err = h.db.Update(func(tx *bbolt.Tx) error {
|
||||
_, err := tx.CreateBucketIfNotExists([]byte(h.config.Bucket))
|
||||
return err
|
||||
})
|
||||
return err
|
||||
}
|
||||
|
||||
// Stop closes the boltdb instance.
|
||||
func (h *Hook) Stop() error {
|
||||
return h.db.Close()
|
||||
err := h.db.Close()
|
||||
h.db = nil
|
||||
return err
|
||||
}
|
||||
|
||||
// OnSessionEstablished adds a client to the store when their session is established.
|
||||
@@ -147,7 +162,7 @@ func (h *Hook) updateClient(cl *mqtt.Client) {
|
||||
|
||||
props := cl.Properties.Props.Copy(false)
|
||||
in := &storage.Client{
|
||||
ID: clientKey(cl),
|
||||
ID: cl.ID,
|
||||
T: storage.ClientKey,
|
||||
Remote: cl.Net.Remote,
|
||||
Listener: cl.Net.Listener,
|
||||
@@ -167,10 +182,8 @@ func (h *Hook) updateClient(cl *mqtt.Client) {
|
||||
},
|
||||
Will: storage.ClientWill(cl.Properties.Will),
|
||||
}
|
||||
err := h.db.Save(in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to save client data", "error", err, "data", in)
|
||||
}
|
||||
|
||||
_ = h.setKv(clientKey(cl), in)
|
||||
}
|
||||
|
||||
// OnDisconnect removes a client from the store if they were using a clean session.
|
||||
@@ -188,10 +201,7 @@ func (h *Hook) OnDisconnect(cl *mqtt.Client, _ error, expire bool) {
|
||||
return
|
||||
}
|
||||
|
||||
err := h.db.DeleteStruct(&storage.Client{ID: clientKey(cl)})
|
||||
if err != nil && !errors.Is(err, storm.ErrNotFound) {
|
||||
h.Log.Error("failed to delete client", "error", err, "id", clientKey(cl))
|
||||
}
|
||||
_ = h.delKv(clientKey(cl))
|
||||
}
|
||||
|
||||
// OnSubscribed adds one or more client subscriptions to the store.
|
||||
@@ -214,11 +224,7 @@ func (h *Hook) OnSubscribed(cl *mqtt.Client, pk packets.Packet, reasonCodes []by
|
||||
RetainHandling: pk.Filters[i].RetainHandling,
|
||||
RetainAsPublished: pk.Filters[i].RetainAsPublished,
|
||||
}
|
||||
|
||||
err := h.db.Save(in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to save subscription data", "error", err, "client", cl.ID, "data", in)
|
||||
}
|
||||
_ = h.setKv(in.ID, in)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -230,12 +236,7 @@ func (h *Hook) OnUnsubscribed(cl *mqtt.Client, pk packets.Packet) {
|
||||
}
|
||||
|
||||
for i := 0; i < len(pk.Filters); i++ {
|
||||
err := h.db.DeleteStruct(&storage.Subscription{
|
||||
ID: subscriptionKey(cl, pk.Filters[i].Filter),
|
||||
})
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete client", "error", err, "id", subscriptionKey(cl, pk.Filters[i].Filter))
|
||||
}
|
||||
_ = h.delKv(subscriptionKey(cl, pk.Filters[i].Filter))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -247,12 +248,7 @@ func (h *Hook) OnRetainMessage(cl *mqtt.Client, pk packets.Packet, r int64) {
|
||||
}
|
||||
|
||||
if r == -1 {
|
||||
err := h.db.DeleteStruct(&storage.Message{
|
||||
ID: retainedKey(pk.TopicName),
|
||||
})
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete retained publish", "error", err, "id", retainedKey(pk.TopicName))
|
||||
}
|
||||
_ = h.delKv(retainedKey(pk.TopicName))
|
||||
return
|
||||
}
|
||||
|
||||
@@ -264,6 +260,7 @@ func (h *Hook) OnRetainMessage(cl *mqtt.Client, pk packets.Packet, r int64) {
|
||||
TopicName: pk.TopicName,
|
||||
Payload: pk.Payload,
|
||||
Created: pk.Created,
|
||||
Client: cl.ID,
|
||||
Origin: pk.Origin,
|
||||
Properties: storage.MessageProperties{
|
||||
PayloadFormat: props.PayloadFormat,
|
||||
@@ -276,10 +273,8 @@ func (h *Hook) OnRetainMessage(cl *mqtt.Client, pk packets.Packet, r int64) {
|
||||
User: props.User,
|
||||
},
|
||||
}
|
||||
err := h.db.Save(in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to save retained publish data", "error", err, "client", cl.ID, "data", in)
|
||||
}
|
||||
|
||||
_ = h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnQosPublish adds or updates an inflight message in the store.
|
||||
@@ -293,6 +288,7 @@ func (h *Hook) OnQosPublish(cl *mqtt.Client, pk packets.Packet, sent int64, rese
|
||||
in := &storage.Message{
|
||||
ID: inflightKey(cl, pk),
|
||||
T: storage.InflightKey,
|
||||
Client: cl.ID,
|
||||
Origin: pk.Origin,
|
||||
FixedHeader: pk.FixedHeader,
|
||||
TopicName: pk.TopicName,
|
||||
@@ -311,10 +307,7 @@ func (h *Hook) OnQosPublish(cl *mqtt.Client, pk packets.Packet, sent int64, rese
|
||||
},
|
||||
}
|
||||
|
||||
err := h.db.Save(in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to save qos inflight data", "error", err, "client", cl.ID, "data", in)
|
||||
}
|
||||
_ = h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnQosComplete removes a resolved inflight message from the store.
|
||||
@@ -324,12 +317,7 @@ func (h *Hook) OnQosComplete(cl *mqtt.Client, pk packets.Packet) {
|
||||
return
|
||||
}
|
||||
|
||||
err := h.db.DeleteStruct(&storage.Message{
|
||||
ID: inflightKey(cl, pk),
|
||||
})
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete inflight data", "error", err, "id", inflightKey(cl, pk))
|
||||
}
|
||||
_ = h.delKv(inflightKey(cl, pk))
|
||||
}
|
||||
|
||||
// OnQosDropped removes a dropped inflight message from the store.
|
||||
@@ -354,10 +342,7 @@ func (h *Hook) OnSysInfoTick(sys *system.Info) {
|
||||
Info: *sys,
|
||||
}
|
||||
|
||||
err := h.db.Save(in)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to save $SYS data", "error", err, "data", in)
|
||||
}
|
||||
_ = h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnRetainedExpired deletes expired retained messages from the store.
|
||||
@@ -366,10 +351,7 @@ func (h *Hook) OnRetainedExpired(filter string) {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
if err := h.db.DeleteStruct(&storage.Message{ID: retainedKey(filter)}); err != nil {
|
||||
h.Log.Error("failed to delete retained publish", "error", err, "id", retainedKey(filter))
|
||||
}
|
||||
_ = h.delKv(retainedKey(filter))
|
||||
}
|
||||
|
||||
// OnClientExpired deletes expired clients from the store.
|
||||
@@ -378,25 +360,24 @@ func (h *Hook) OnClientExpired(cl *mqtt.Client) {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
err := h.db.DeleteStruct(&storage.Client{ID: clientKey(cl)})
|
||||
if err != nil && !errors.Is(err, storm.ErrNotFound) {
|
||||
h.Log.Error("failed to delete expired client", "error", err, "id", clientKey(cl))
|
||||
}
|
||||
_ = h.delKv(clientKey(cl))
|
||||
}
|
||||
|
||||
// StoredClients returns all stored clients from the store.
|
||||
func (h *Hook) StoredClients() (v []storage.Client, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
err = h.db.Find("T", storage.ClientKey, &v)
|
||||
if err != nil && !errors.Is(err, storm.ErrNotFound) {
|
||||
return
|
||||
return v, storage.ErrDBFileNotOpen
|
||||
}
|
||||
|
||||
err = h.iterKv(storage.ClientKey, func(value []byte) error {
|
||||
obj := storage.Client{}
|
||||
err = obj.UnmarshalBinary(value)
|
||||
if err == nil {
|
||||
v = append(v, obj)
|
||||
}
|
||||
return err
|
||||
})
|
||||
return v, nil
|
||||
}
|
||||
|
||||
@@ -404,58 +385,142 @@ func (h *Hook) StoredClients() (v []storage.Client, err error) {
|
||||
func (h *Hook) StoredSubscriptions() (v []storage.Subscription, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
return v, storage.ErrDBFileNotOpen
|
||||
}
|
||||
|
||||
err = h.db.Find("T", storage.SubscriptionKey, &v)
|
||||
if err != nil && !errors.Is(err, storm.ErrNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
v = make([]storage.Subscription, 0)
|
||||
err = h.iterKv(storage.SubscriptionKey, func(value []byte) error {
|
||||
obj := storage.Subscription{}
|
||||
err = obj.UnmarshalBinary(value)
|
||||
if err == nil {
|
||||
v = append(v, obj)
|
||||
}
|
||||
return err
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
// StoredRetainedMessages returns all stored retained messages from the store.
|
||||
func (h *Hook) StoredRetainedMessages() (v []storage.Message, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
return v, storage.ErrDBFileNotOpen
|
||||
}
|
||||
|
||||
err = h.db.Find("T", storage.RetainedKey, &v)
|
||||
if err != nil && !errors.Is(err, storm.ErrNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
v = make([]storage.Message, 0)
|
||||
err = h.iterKv(storage.RetainedKey, func(value []byte) error {
|
||||
obj := storage.Message{}
|
||||
err = obj.UnmarshalBinary(value)
|
||||
if err == nil {
|
||||
v = append(v, obj)
|
||||
}
|
||||
return err
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
// StoredInflightMessages returns all stored inflight messages from the store.
|
||||
func (h *Hook) StoredInflightMessages() (v []storage.Message, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
return v, storage.ErrDBFileNotOpen
|
||||
}
|
||||
|
||||
err = h.db.Find("T", storage.InflightKey, &v)
|
||||
if err != nil && !errors.Is(err, storm.ErrNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
v = make([]storage.Message, 0)
|
||||
err = h.iterKv(storage.InflightKey, func(value []byte) error {
|
||||
obj := storage.Message{}
|
||||
err = obj.UnmarshalBinary(value)
|
||||
if err == nil {
|
||||
v = append(v, obj)
|
||||
}
|
||||
return err
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
// StoredSysInfo returns the system info from the store.
|
||||
func (h *Hook) StoredSysInfo() (v storage.SystemInfo, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
return v, storage.ErrDBFileNotOpen
|
||||
}
|
||||
|
||||
err = h.db.One("ID", storage.SysInfoKey, &v)
|
||||
if err != nil && !errors.Is(err, storm.ErrNotFound) {
|
||||
err = h.getKv(storage.SysInfoKey, &v)
|
||||
if err != nil && !errors.Is(err, ErrKeyNotFound) {
|
||||
return
|
||||
}
|
||||
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// setKv stores a key-value pair in the database.
|
||||
func (h *Hook) setKv(k string, v storage.Serializable) error {
|
||||
err := h.db.Update(func(tx *bbolt.Tx) error {
|
||||
|
||||
bucket := tx.Bucket([]byte(h.config.Bucket))
|
||||
data, _ := v.MarshalBinary()
|
||||
err := bucket.Put([]byte(k), data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
})
|
||||
if err != nil {
|
||||
h.Log.Error("failed to upsert data", "error", err, "key", k)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// delKv deletes a key-value pair from the database.
|
||||
func (h *Hook) delKv(k string) error {
|
||||
err := h.db.Update(func(tx *bbolt.Tx) error {
|
||||
bucket := tx.Bucket([]byte(h.config.Bucket))
|
||||
err := bucket.Delete([]byte(k))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
})
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete data", "error", err, "key", k)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// getKv retrieves the value associated with a key from the database.
|
||||
func (h *Hook) getKv(k string, v storage.Serializable) error {
|
||||
err := h.db.View(func(tx *bbolt.Tx) error {
|
||||
bucket := tx.Bucket([]byte(h.config.Bucket))
|
||||
|
||||
value := bucket.Get([]byte(k))
|
||||
if value == nil {
|
||||
return ErrKeyNotFound
|
||||
}
|
||||
|
||||
return v.UnmarshalBinary(value)
|
||||
})
|
||||
if err != nil {
|
||||
h.Log.Error("failed to get data", "error", err, "key", k)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// iterKv iterates over key-value pairs with keys having the specified prefix in the database.
|
||||
func (h *Hook) iterKv(prefix string, visit func([]byte) error) error {
|
||||
err := h.db.View(func(tx *bbolt.Tx) error {
|
||||
bucket := tx.Bucket([]byte(h.config.Bucket))
|
||||
|
||||
c := bucket.Cursor()
|
||||
for k, v := c.Seek([]byte(prefix)); k != nil && string(k[:len(prefix)]) == prefix; k, v = c.Next() {
|
||||
if err := visit(v); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
h.Log.Error("failed to iter data", "error", err, "prefix", prefix)
|
||||
}
|
||||
return err
|
||||
}
|
||||
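For reference, the canonical bbolt prefix-scan idiom checks the cursor key with bytes.HasPrefix, which also guards against keys shorter than the prefix; a minimal standalone sketch with a hypothetical bucket and key prefix:

```go
// Minimal standalone sketch of a bbolt prefix scan using bytes.HasPrefix.
// The bucket name and key prefix here are hypothetical.
package main

import (
	"bytes"
	"fmt"
	"log"

	"go.etcd.io/bbolt"
)

func main() {
	db, err := bbolt.Open("example.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.View(func(tx *bbolt.Tx) error {
		bucket := tx.Bucket([]byte("mochi"))
		if bucket == nil {
			return nil // nothing stored yet
		}
		prefix := []byte("CL_")
		c := bucket.Cursor()
		for k, v := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, v = c.Next() {
			fmt.Printf("key=%s value=%d bytes\n", k, len(v))
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```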
|
||||
@@ -1,10 +1,11 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: mochi-co
|
||||
// SPDX-FileContributor: mochi-co, werbenhu
|
||||
|
||||
package bolt
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"log/slog"
|
||||
"os"
|
||||
"testing"
|
||||
@@ -15,7 +16,6 @@ import (
|
||||
"github.com/mochi-mqtt/server/v2/packets"
|
||||
"github.com/mochi-mqtt/server/v2/system"
|
||||
|
||||
"github.com/asdine/storm/v3"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
@@ -45,7 +45,7 @@ func teardown(t *testing.T, path string, h *Hook) {
|
||||
|
||||
func TestClientKey(t *testing.T) {
|
||||
k := clientKey(&mqtt.Client{ID: "cl1"})
|
||||
require.Equal(t, "cl1", k)
|
||||
require.Equal(t, "CL_cl1", k)
|
||||
}
|
||||
|
||||
func TestSubscriptionKey(t *testing.T) {
|
||||
@@ -130,7 +130,7 @@ func TestOnSessionEstablishedThenOnDisconnect(t *testing.T) {
|
||||
h.OnSessionEstablished(client, packets.Packet{})
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.db.One("ID", clientKey(client), r)
|
||||
err = h.getKv(clientKey(client), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.ID)
|
||||
require.Equal(t, client.Net.Remote, r.Remote)
|
||||
@@ -141,15 +141,15 @@ func TestOnSessionEstablishedThenOnDisconnect(t *testing.T) {
|
||||
|
||||
h.OnDisconnect(client, nil, false)
|
||||
r2 := new(storage.Client)
|
||||
err = h.db.One("ID", clientKey(client), r2)
|
||||
err = h.getKv(clientKey(client), r2)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.ID)
|
||||
|
||||
h.OnDisconnect(client, nil, true)
|
||||
r3 := new(storage.Client)
|
||||
err = h.db.One("ID", clientKey(client), r3)
|
||||
err = h.getKv(clientKey(client), r3)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, storm.ErrNotFound, err)
|
||||
require.ErrorIs(t, ErrKeyNotFound, err)
|
||||
require.Empty(t, r3.ID)
|
||||
}
|
||||
|
||||
@@ -180,7 +180,7 @@ func TestOnWillSent(t *testing.T) {
|
||||
h.OnWillSent(c1, packets.Packet{})
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.db.One("ID", clientKey(client), r)
|
||||
err = h.getKv(clientKey(client), r)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, uint32(1), r.Will.Flag)
|
||||
@@ -197,18 +197,18 @@ func TestOnClientExpired(t *testing.T) {
|
||||
cl := &mqtt.Client{ID: "cl1"}
|
||||
clientKey := clientKey(cl)
|
||||
|
||||
err = h.db.Save(&storage.Client{ID: cl.ID})
|
||||
err = h.setKv(clientKey, &storage.Client{ID: cl.ID})
|
||||
require.NoError(t, err)
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.db.One("ID", clientKey, r)
|
||||
err = h.getKv(clientKey, r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, cl.ID, r.ID)
|
||||
|
||||
h.OnClientExpired(cl)
|
||||
err = h.db.One("ID", clientKey, r)
|
||||
err = h.getKv(clientKey, r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, storm.ErrNotFound, err)
|
||||
require.ErrorIs(t, ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestOnClientExpiredClosedDB(t *testing.T) {
|
||||
@@ -274,16 +274,16 @@ func TestOnSubscribedThenOnUnsubscribed(t *testing.T) {
|
||||
h.OnSubscribed(client, pkf, []byte{0})
|
||||
r := new(storage.Subscription)
|
||||
|
||||
err = h.db.One("ID", subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
err = h.getKv(subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.Client)
|
||||
require.Equal(t, pkf.Filters[0].Filter, r.Filter)
|
||||
require.Equal(t, byte(0), r.Qos)
|
||||
|
||||
h.OnUnsubscribed(client, pkf)
|
||||
err = h.db.One("ID", subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
err = h.getKv(subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
require.Error(t, err)
|
||||
require.Equal(t, storm.ErrNotFound, err)
|
||||
require.Equal(t, ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestOnSubscribedNoDB(t *testing.T) {
|
||||
@@ -334,21 +334,21 @@ func TestOnRetainMessageThenUnset(t *testing.T) {
|
||||
h.OnRetainMessage(client, pk, 1)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.db.One("ID", retainedKey(pk.TopicName), r)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, pk.TopicName, r.TopicName)
|
||||
require.Equal(t, pk.Payload, r.Payload)
|
||||
|
||||
h.OnRetainMessage(client, pk, -1)
|
||||
err = h.db.One("ID", retainedKey(pk.TopicName), r)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.Error(t, err)
|
||||
require.Equal(t, storm.ErrNotFound, err)
|
||||
require.Equal(t, ErrKeyNotFound, err)
|
||||
|
||||
// coverage: delete deleted
|
||||
h.OnRetainMessage(client, pk, -1)
|
||||
err = h.db.One("ID", retainedKey(pk.TopicName), r)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.Error(t, err)
|
||||
require.Equal(t, storm.ErrNotFound, err)
|
||||
require.Equal(t, ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestOnRetainedExpired(t *testing.T) {
|
||||
@@ -364,18 +364,18 @@ func TestOnRetainedExpired(t *testing.T) {
|
||||
TopicName: "a/b/c",
|
||||
}
|
||||
|
||||
err = h.db.Save(m)
|
||||
err = h.setKv(m.ID, m)
|
||||
require.NoError(t, err)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.db.One("ID", m.ID, r)
|
||||
err = h.getKv(m.ID, r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, m.TopicName, r.TopicName)
|
||||
|
||||
h.OnRetainedExpired(m.TopicName)
|
||||
err = h.db.One("ID", m.ID, r)
|
||||
err = h.getKv(m.ID, r)
|
||||
require.Error(t, err)
|
||||
require.Equal(t, storm.ErrNotFound, err)
|
||||
require.Equal(t, ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestOnRetainedExpiredClosedDB(t *testing.T) {
|
||||
@@ -427,7 +427,7 @@ func TestOnQosPublishThenQOSComplete(t *testing.T) {
|
||||
h.OnQosPublish(client, pk, time.Now().Unix(), 0)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.db.One("ID", inflightKey(client, pk), r)
|
||||
err = h.getKv(inflightKey(client, pk), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, pk.TopicName, r.TopicName)
|
||||
require.Equal(t, pk.Payload, r.Payload)
|
||||
@@ -438,9 +438,9 @@ func TestOnQosPublishThenQOSComplete(t *testing.T) {
|
||||
|
||||
// OnQosDropped is a passthrough to OnQosComplete here
|
||||
h.OnQosDropped(client, pk)
|
||||
err = h.db.One("ID", inflightKey(client, pk), r)
|
||||
err = h.getKv(inflightKey(client, pk), r)
|
||||
require.Error(t, err)
|
||||
require.Equal(t, storm.ErrNotFound, err)
|
||||
require.Equal(t, ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestOnQosPublishNoDB(t *testing.T) {
|
||||
@@ -494,7 +494,7 @@ func TestOnSysInfoTick(t *testing.T) {
|
||||
h.OnSysInfoTick(info)
|
||||
|
||||
r := new(storage.SystemInfo)
|
||||
err = h.db.One("ID", storage.SysInfoKey, r)
|
||||
err = h.getKv(storage.SysInfoKey, r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, info.Version, r.Version)
|
||||
require.Equal(t, info.BytesReceived, r.BytesReceived)
|
||||
@@ -524,13 +524,13 @@ func TestStoredClients(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with clients
|
||||
err = h.db.Save(&storage.Client{ID: "cl1", T: storage.ClientKey})
|
||||
err = h.setKv(storage.ClientKey+"_"+"cl1", &storage.Client{ID: "cl1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Client{ID: "cl2", T: storage.ClientKey})
|
||||
err = h.setKv(storage.ClientKey+"_"+"cl2", &storage.Client{ID: "cl2"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Client{ID: "cl3", T: storage.ClientKey})
|
||||
err = h.setKv(storage.ClientKey+"_"+"cl3", &storage.Client{ID: "cl3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredClients()
|
||||
@@ -546,7 +546,7 @@ func TestStoredClientsNoDB(t *testing.T) {
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredClients()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
require.ErrorIs(t, storage.ErrDBFileNotOpen, err)
|
||||
}
|
||||
|
||||
func TestStoredClientsClosedDB(t *testing.T) {
|
||||
@@ -557,7 +557,7 @@ func TestStoredClientsClosedDB(t *testing.T) {
|
||||
teardown(t, h.config.Path, h)
|
||||
v, err := h.StoredClients()
|
||||
require.Empty(t, v)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, storage.ErrDBFileNotOpen, err)
|
||||
}
|
||||
|
||||
func TestStoredSubscriptions(t *testing.T) {
|
||||
@@ -568,13 +568,13 @@ func TestStoredSubscriptions(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with subscriptions
|
||||
err = h.db.Save(&storage.Subscription{ID: "sub1", T: storage.SubscriptionKey})
|
||||
err = h.setKv(storage.SubscriptionKey+"_"+"sub1", &storage.Subscription{ID: "sub1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Subscription{ID: "sub2", T: storage.SubscriptionKey})
|
||||
err = h.setKv(storage.SubscriptionKey+"_"+"sub2", &storage.Subscription{ID: "sub2"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Subscription{ID: "sub3", T: storage.SubscriptionKey})
|
||||
err = h.setKv(storage.SubscriptionKey+"_"+"sub3", &storage.Subscription{ID: "sub3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredSubscriptions()
|
||||
@@ -590,7 +590,7 @@ func TestStoredSubscriptionsNoDB(t *testing.T) {
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredSubscriptions()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
require.ErrorIs(t, storage.ErrDBFileNotOpen, err)
|
||||
}
|
||||
|
||||
func TestStoredSubscriptionsClosedDB(t *testing.T) {
|
||||
@@ -601,7 +601,7 @@ func TestStoredSubscriptionsClosedDB(t *testing.T) {
|
||||
teardown(t, h.config.Path, h)
|
||||
v, err := h.StoredSubscriptions()
|
||||
require.Empty(t, v)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, storage.ErrDBFileNotOpen, err)
|
||||
}
|
||||
|
||||
func TestStoredRetainedMessages(t *testing.T) {
|
||||
@@ -612,16 +612,16 @@ func TestStoredRetainedMessages(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with messages
|
||||
err = h.db.Save(&storage.Message{ID: "m1", T: storage.RetainedKey})
|
||||
err = h.setKv(storage.RetainedKey+"_"+"m1", &storage.Message{ID: "m1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Message{ID: "m2", T: storage.RetainedKey})
|
||||
err = h.setKv(storage.RetainedKey+"_"+"m2", &storage.Message{ID: "m2"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Message{ID: "m3", T: storage.RetainedKey})
|
||||
err = h.setKv(storage.RetainedKey+"_"+"m3", &storage.Message{ID: "m3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Message{ID: "i3", T: storage.InflightKey})
|
||||
err = h.setKv(storage.InflightKey+"_"+"i3", &storage.Message{ID: "i3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredRetainedMessages()
|
||||
@@ -637,7 +637,7 @@ func TestStoredRetainedMessagesNoDB(t *testing.T) {
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredRetainedMessages()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
require.ErrorIs(t, storage.ErrDBFileNotOpen, err)
|
||||
}
|
||||
|
||||
func TestStoredRetainedMessagesClosedDB(t *testing.T) {
|
||||
@@ -659,16 +659,16 @@ func TestStoredInflightMessages(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with messages
|
||||
err = h.db.Save(&storage.Message{ID: "i1", T: storage.InflightKey})
|
||||
err = h.setKv(storage.InflightKey+"_"+"i1", &storage.Message{ID: "i1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Message{ID: "i2", T: storage.InflightKey})
|
||||
err = h.setKv(storage.InflightKey+"_"+"i2", &storage.Message{ID: "i2"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Message{ID: "i3", T: storage.InflightKey})
|
||||
err = h.setKv(storage.InflightKey+"_"+"i3", &storage.Message{ID: "i3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.db.Save(&storage.Message{ID: "m1", T: storage.RetainedKey})
|
||||
err = h.setKv(storage.RetainedKey+"_"+"m1", &storage.Message{ID: "m1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredInflightMessages()
|
||||
@@ -684,7 +684,7 @@ func TestStoredInflightMessagesNoDB(t *testing.T) {
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredInflightMessages()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
require.ErrorIs(t, storage.ErrDBFileNotOpen, err)
|
||||
}
|
||||
|
||||
func TestStoredInflightMessagesClosedDB(t *testing.T) {
|
||||
@@ -695,7 +695,7 @@ func TestStoredInflightMessagesClosedDB(t *testing.T) {
|
||||
teardown(t, h.config.Path, h)
|
||||
v, err := h.StoredInflightMessages()
|
||||
require.Empty(t, v)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, storage.ErrDBFileNotOpen, err)
|
||||
}
|
||||
|
||||
func TestStoredSysInfo(t *testing.T) {
|
||||
@@ -706,7 +706,7 @@ func TestStoredSysInfo(t *testing.T) {
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with sys info
|
||||
err = h.db.Save(&storage.SystemInfo{
|
||||
err = h.setKv(storage.SysInfoKey, &storage.SystemInfo{
|
||||
ID: storage.SysInfoKey,
|
||||
Info: system.Info{
|
||||
Version: "2.0.0",
|
||||
@@ -725,7 +725,7 @@ func TestStoredSysInfoNoDB(t *testing.T) {
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredSysInfo()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
require.ErrorIs(t, storage.ErrDBFileNotOpen, err)
|
||||
}
|
||||
|
||||
func TestStoredSysInfoClosedDB(t *testing.T) {
|
||||
@@ -738,3 +738,54 @@ func TestStoredSysInfoClosedDB(t *testing.T) {
|
||||
require.Empty(t, v)
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestGetSetDelKv(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv("testId", &storage.Client{ID: "testId"})
|
||||
require.NoError(t, err)
|
||||
|
||||
var obj storage.Client
|
||||
err = h.getKv("testId", &obj)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.delKv("testId")
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.getKv("testId", &obj)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, ErrKeyNotFound, err)
|
||||
}
|
||||
|
||||
func TestIterKv(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
err := h.Init(nil)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
require.NoError(t, err)
|
||||
|
||||
h.setKv("prefix_a_1", &storage.Client{ID: "1"})
|
||||
h.setKv("prefix_a_2", &storage.Client{ID: "2"})
|
||||
h.setKv("prefix_b_2", &storage.Client{ID: "3"})
|
||||
|
||||
var clients []storage.Client
|
||||
err = h.iterKv("prefix_a", func(data []byte) error {
|
||||
var item storage.Client
|
||||
item.UnmarshalBinary(data)
|
||||
clients = append(clients, item)
|
||||
return nil
|
||||
})
|
||||
require.Equal(t, 2, len(clients))
|
||||
require.NoError(t, err)
|
||||
|
||||
visitErr := errors.New("iter visit error")
|
||||
err = h.iterKv("prefix_b", func(data []byte) error {
|
||||
return visitErr
|
||||
})
|
||||
require.ErrorIs(t, visitErr, err)
|
||||
}
|
||||
|
||||
hooks/storage/pebble/pebble.go (new file, 526 lines)
@@ -0,0 +1,526 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: werbenhu
|
||||
|
||||
package pebble
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
pebbledb "github.com/cockroachdb/pebble"
|
||||
mqtt "github.com/mochi-mqtt/server/v2"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage"
|
||||
"github.com/mochi-mqtt/server/v2/packets"
|
||||
"github.com/mochi-mqtt/server/v2/system"
|
||||
)
|
||||
|
||||
const (
|
||||
// defaultDbFile is the default file path for the pebble db file.
|
||||
defaultDbFile = ".pebble"
|
||||
)
|
||||
|
||||
// clientKey returns a primary key for a client.
|
||||
func clientKey(cl *mqtt.Client) string {
|
||||
return storage.ClientKey + "_" + cl.ID
|
||||
}
|
||||
|
||||
// subscriptionKey returns a primary key for a subscription.
|
||||
func subscriptionKey(cl *mqtt.Client, filter string) string {
|
||||
return storage.SubscriptionKey + "_" + cl.ID + ":" + filter
|
||||
}
|
||||
|
||||
// retainedKey returns a primary key for a retained message.
|
||||
func retainedKey(topic string) string {
|
||||
return storage.RetainedKey + "_" + topic
|
||||
}
|
||||
|
||||
// inflightKey returns a primary key for an inflight message.
|
||||
func inflightKey(cl *mqtt.Client, pk packets.Packet) string {
|
||||
return storage.InflightKey + "_" + cl.ID + ":" + pk.FormatID()
|
||||
}
|
||||
|
||||
// sysInfoKey returns a primary key for system info.
|
||||
func sysInfoKey() string {
|
||||
return storage.SysInfoKey
|
||||
}
|
||||
|
||||
// keyUpperBound returns the upper bound for a given byte slice by incrementing the last byte.
|
||||
// It returns nil if incrementing overflows every byte (i.e. the key is all 0xff bytes).
|
||||
func keyUpperBound(b []byte) []byte {
|
||||
end := make([]byte, len(b))
|
||||
copy(end, b)
|
||||
for i := len(end) - 1; i >= 0; i-- {
|
||||
end[i] = end[i] + 1
|
||||
if end[i] != 0 {
|
||||
return end[:i+1]
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
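
For context, a hedged sketch of how this upper bound pairs with pebble's iterator options to bound a prefix scan, mirroring the Stored* methods later in this file (the helper name and callback are illustrative, not part of the change):

```go
// prefixScan visits the value of every key beginning with prefix.
func prefixScan(db *pebbledb.DB, prefix string, visit func(val []byte)) error {
	iter, err := db.NewIter(&pebbledb.IterOptions{
		LowerBound: []byte(prefix),
		UpperBound: keyUpperBound([]byte(prefix)), // nil upper bound means "scan to the end"
	})
	if err != nil {
		return err
	}
	defer iter.Close()
	for iter.First(); iter.Valid(); iter.Next() {
		visit(iter.Value())
	}
	return iter.Error()
}
```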
|
||||
|
||||
const (
|
||||
NoSync = "NoSync" // NoSync specifies the default write options for writes which do not synchronize to disk.
|
||||
Sync = "Sync" // Sync specifies the default write options for writes which synchronize to disk.
|
||||
)
|
||||
|
||||
// Options contains configuration settings for the pebble DB instance.
|
||||
type Options struct {
|
||||
Options *pebbledb.Options
|
||||
Mode string `yaml:"mode" json:"mode"`
|
||||
Path string `yaml:"path" json:"path"`
|
||||
}
|
||||
|
||||
// Hook is a persistent storage hook using a pebble DB file store as a backend.
|
||||
type Hook struct {
|
||||
mqtt.HookBase
|
||||
config *Options // options for configuring the pebble DB instance.
|
||||
db *pebbledb.DB // the pebble DB instance
|
||||
mode *pebbledb.WriteOptions // mode holds the optional per-query parameters for Set and Delete operations
|
||||
}
|
||||
|
||||
// ID returns the id of the hook.
|
||||
func (h *Hook) ID() string {
|
||||
return "pebble-db"
|
||||
}
|
||||
|
||||
// Provides indicates which hook methods this hook provides.
|
||||
func (h *Hook) Provides(b byte) bool {
|
||||
return bytes.Contains([]byte{
|
||||
mqtt.OnSessionEstablished,
|
||||
mqtt.OnDisconnect,
|
||||
mqtt.OnSubscribed,
|
||||
mqtt.OnUnsubscribed,
|
||||
mqtt.OnRetainMessage,
|
||||
mqtt.OnWillSent,
|
||||
mqtt.OnQosPublish,
|
||||
mqtt.OnQosComplete,
|
||||
mqtt.OnQosDropped,
|
||||
mqtt.OnSysInfoTick,
|
||||
mqtt.OnClientExpired,
|
||||
mqtt.OnRetainedExpired,
|
||||
mqtt.StoredClients,
|
||||
mqtt.StoredInflightMessages,
|
||||
mqtt.StoredRetainedMessages,
|
||||
mqtt.StoredSubscriptions,
|
||||
mqtt.StoredSysInfo,
|
||||
}, []byte{b})
|
||||
}
|
||||
|
||||
// Init initializes and connects to the pebble instance.
|
||||
func (h *Hook) Init(config any) error {
|
||||
if _, ok := config.(*Options); !ok && config != nil {
|
||||
return mqtt.ErrInvalidConfigType
|
||||
}
|
||||
|
||||
if config == nil {
|
||||
h.config = new(Options)
|
||||
} else {
|
||||
h.config = config.(*Options)
|
||||
}
|
||||
|
||||
if len(h.config.Path) == 0 {
|
||||
h.config.Path = defaultDbFile
|
||||
}
|
||||
|
||||
if h.config.Options == nil {
|
||||
h.config.Options = &pebbledb.Options{}
|
||||
}
|
||||
|
||||
h.mode = pebbledb.NoSync
|
||||
if strings.EqualFold(h.config.Mode, "Sync") {
|
||||
h.mode = pebbledb.Sync
|
||||
}
|
||||
|
||||
var err error
|
||||
h.db, err = pebbledb.Open(h.config.Path, h.config.Options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
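
A hedged sketch of registering this hook on a broker, in the same way the other storage hooks are wired up (the server setup is illustrative; Path and Mode map directly onto the Options fields above):

```go
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/hooks/storage/pebble"
)

func main() {
	server := mqtt.New(nil)

	// Persist broker state to a local pebble database; Mode defaults to NoSync
	// when left empty, while Sync flushes every write to disk.
	err := server.AddHook(new(pebble.Hook), &pebble.Options{
		Path: ".pebble",
		Mode: pebble.Sync,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```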
|
||||
|
||||
// Stop closes the pebble instance.
|
||||
func (h *Hook) Stop() error {
|
||||
err := h.db.Close()
|
||||
h.db = nil
|
||||
return err
|
||||
}
|
||||
|
||||
// OnSessionEstablished adds a client to the store when their session is established.
|
||||
func (h *Hook) OnSessionEstablished(cl *mqtt.Client, pk packets.Packet) {
|
||||
h.updateClient(cl)
|
||||
}
|
||||
|
||||
// OnWillSent is called when a client sends a Will Message and the Will Message is removed from the client record.
|
||||
func (h *Hook) OnWillSent(cl *mqtt.Client, pk packets.Packet) {
|
||||
h.updateClient(cl)
|
||||
}
|
||||
|
||||
// updateClient writes the client data to the store.
|
||||
func (h *Hook) updateClient(cl *mqtt.Client) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
props := cl.Properties.Props.Copy(false)
|
||||
in := &storage.Client{
|
||||
ID: cl.ID,
|
||||
T: storage.ClientKey,
|
||||
Remote: cl.Net.Remote,
|
||||
Listener: cl.Net.Listener,
|
||||
Username: cl.Properties.Username,
|
||||
Clean: cl.Properties.Clean,
|
||||
ProtocolVersion: cl.Properties.ProtocolVersion,
|
||||
Properties: storage.ClientProperties{
|
||||
SessionExpiryInterval: props.SessionExpiryInterval,
|
||||
AuthenticationMethod: props.AuthenticationMethod,
|
||||
AuthenticationData: props.AuthenticationData,
|
||||
RequestProblemInfo: props.RequestProblemInfo,
|
||||
RequestResponseInfo: props.RequestResponseInfo,
|
||||
ReceiveMaximum: props.ReceiveMaximum,
|
||||
TopicAliasMaximum: props.TopicAliasMaximum,
|
||||
User: props.User,
|
||||
MaximumPacketSize: props.MaximumPacketSize,
|
||||
},
|
||||
Will: storage.ClientWill(cl.Properties.Will),
|
||||
}
|
||||
h.setKv(clientKey(cl), in)
|
||||
}
|
||||
|
||||
// OnDisconnect removes a client from the store if their session has expired.
|
||||
func (h *Hook) OnDisconnect(cl *mqtt.Client, _ error, expire bool) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
h.updateClient(cl)
|
||||
|
||||
if !expire {
|
||||
return
|
||||
}
|
||||
|
||||
if errors.Is(cl.StopCause(), packets.ErrSessionTakenOver) {
|
||||
return
|
||||
}
|
||||
|
||||
h.delKv(clientKey(cl))
|
||||
}
|
||||
|
||||
// OnSubscribed adds one or more client subscriptions to the store.
|
||||
func (h *Hook) OnSubscribed(cl *mqtt.Client, pk packets.Packet, reasonCodes []byte) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
var in *storage.Subscription
|
||||
for i := 0; i < len(pk.Filters); i++ {
|
||||
in = &storage.Subscription{
|
||||
ID: subscriptionKey(cl, pk.Filters[i].Filter),
|
||||
T: storage.SubscriptionKey,
|
||||
Client: cl.ID,
|
||||
Qos: reasonCodes[i],
|
||||
Filter: pk.Filters[i].Filter,
|
||||
Identifier: pk.Filters[i].Identifier,
|
||||
NoLocal: pk.Filters[i].NoLocal,
|
||||
RetainHandling: pk.Filters[i].RetainHandling,
|
||||
RetainAsPublished: pk.Filters[i].RetainAsPublished,
|
||||
}
|
||||
h.setKv(in.ID, in)
|
||||
}
|
||||
}
|
||||
|
||||
// OnUnsubscribed removes one or more client subscriptions from the store.
|
||||
func (h *Hook) OnUnsubscribed(cl *mqtt.Client, pk packets.Packet) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
for i := 0; i < len(pk.Filters); i++ {
|
||||
h.delKv(subscriptionKey(cl, pk.Filters[i].Filter))
|
||||
}
|
||||
}
|
||||
|
||||
// OnRetainMessage adds a retained message for a topic to the store.
|
||||
func (h *Hook) OnRetainMessage(cl *mqtt.Client, pk packets.Packet, r int64) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
if r == -1 {
|
||||
h.delKv(retainedKey(pk.TopicName))
|
||||
return
|
||||
}
|
||||
|
||||
props := pk.Properties.Copy(false)
|
||||
in := &storage.Message{
|
||||
ID: retainedKey(pk.TopicName),
|
||||
T: storage.RetainedKey,
|
||||
FixedHeader: pk.FixedHeader,
|
||||
TopicName: pk.TopicName,
|
||||
Payload: pk.Payload,
|
||||
Created: pk.Created,
|
||||
Client: cl.ID,
|
||||
Origin: pk.Origin,
|
||||
Properties: storage.MessageProperties{
|
||||
PayloadFormat: props.PayloadFormat,
|
||||
MessageExpiryInterval: props.MessageExpiryInterval,
|
||||
ContentType: props.ContentType,
|
||||
ResponseTopic: props.ResponseTopic,
|
||||
CorrelationData: props.CorrelationData,
|
||||
SubscriptionIdentifier: props.SubscriptionIdentifier,
|
||||
TopicAlias: props.TopicAlias,
|
||||
User: props.User,
|
||||
},
|
||||
}
|
||||
|
||||
h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnQosPublish adds or updates an inflight message in the store.
|
||||
func (h *Hook) OnQosPublish(cl *mqtt.Client, pk packets.Packet, sent int64, resends int) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
props := pk.Properties.Copy(false)
|
||||
in := &storage.Message{
|
||||
ID: inflightKey(cl, pk),
|
||||
T: storage.InflightKey,
|
||||
Client: cl.ID,
|
||||
Origin: pk.Origin,
|
||||
PacketID: pk.PacketID,
|
||||
FixedHeader: pk.FixedHeader,
|
||||
TopicName: pk.TopicName,
|
||||
Payload: pk.Payload,
|
||||
Sent: sent,
|
||||
Created: pk.Created,
|
||||
Properties: storage.MessageProperties{
|
||||
PayloadFormat: props.PayloadFormat,
|
||||
MessageExpiryInterval: props.MessageExpiryInterval,
|
||||
ContentType: props.ContentType,
|
||||
ResponseTopic: props.ResponseTopic,
|
||||
CorrelationData: props.CorrelationData,
|
||||
SubscriptionIdentifier: props.SubscriptionIdentifier,
|
||||
TopicAlias: props.TopicAlias,
|
||||
User: props.User,
|
||||
},
|
||||
}
|
||||
h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnQosComplete removes a resolved inflight message from the store.
|
||||
func (h *Hook) OnQosComplete(cl *mqtt.Client, pk packets.Packet) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
h.delKv(inflightKey(cl, pk))
|
||||
}
|
||||
|
||||
// OnQosDropped removes a dropped inflight message from the store.
|
||||
func (h *Hook) OnQosDropped(cl *mqtt.Client, pk packets.Packet) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
}
|
||||
|
||||
h.OnQosComplete(cl, pk)
|
||||
}
|
||||
|
||||
// OnSysInfoTick stores the latest system info in the store.
|
||||
func (h *Hook) OnSysInfoTick(sys *system.Info) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
in := &storage.SystemInfo{
|
||||
ID: sysInfoKey(),
|
||||
T: storage.SysInfoKey,
|
||||
Info: *sys.Clone(),
|
||||
}
|
||||
h.setKv(in.ID, in)
|
||||
}
|
||||
|
||||
// OnRetainedExpired deletes expired retained messages from the store.
|
||||
func (h *Hook) OnRetainedExpired(filter string) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
h.delKv(retainedKey(filter))
|
||||
}
|
||||
|
||||
// OnClientExpired deletes expired clients from the store.
|
||||
func (h *Hook) OnClientExpired(cl *mqtt.Client) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
h.delKv(clientKey(cl))
|
||||
}
|
||||
|
||||
// StoredClients returns all stored clients from the store.
|
||||
func (h *Hook) StoredClients() (v []storage.Client, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
iter, _ := h.db.NewIter(&pebbledb.IterOptions{
|
||||
LowerBound: []byte(storage.ClientKey),
|
||||
UpperBound: keyUpperBound([]byte(storage.ClientKey)),
|
||||
})
|
||||
|
||||
for iter.First(); iter.Valid(); iter.Next() {
|
||||
item := storage.Client{}
|
||||
if err := item.UnmarshalBinary(iter.Value()); err == nil {
|
||||
v = append(v, item)
|
||||
}
|
||||
}
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// StoredSubscriptions returns all stored subscriptions from the store.
|
||||
func (h *Hook) StoredSubscriptions() (v []storage.Subscription, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
iter, _ := h.db.NewIter(&pebbledb.IterOptions{
|
||||
LowerBound: []byte(storage.SubscriptionKey),
|
||||
UpperBound: keyUpperBound([]byte(storage.SubscriptionKey)),
|
||||
})
|
||||
|
||||
for iter.First(); iter.Valid(); iter.Next() {
|
||||
item := storage.Subscription{}
|
||||
if err := item.UnmarshalBinary(iter.Value()); err == nil {
|
||||
v = append(v, item)
|
||||
}
|
||||
}
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// StoredRetainedMessages returns all stored retained messages from the store.
|
||||
func (h *Hook) StoredRetainedMessages() (v []storage.Message, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
iter, _ := h.db.NewIter(&pebbledb.IterOptions{
|
||||
LowerBound: []byte(storage.RetainedKey),
|
||||
UpperBound: keyUpperBound([]byte(storage.RetainedKey)),
|
||||
})
|
||||
|
||||
for iter.First(); iter.Valid(); iter.Next() {
|
||||
item := storage.Message{}
|
||||
if err := item.UnmarshalBinary(iter.Value()); err == nil {
|
||||
v = append(v, item)
|
||||
}
|
||||
}
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// StoredInflightMessages returns all stored inflight messages from the store.
|
||||
func (h *Hook) StoredInflightMessages() (v []storage.Message, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
iter, _ := h.db.NewIter(&pebbledb.IterOptions{
|
||||
LowerBound: []byte(storage.InflightKey),
|
||||
UpperBound: keyUpperBound([]byte(storage.InflightKey)),
|
||||
})
|
||||
|
||||
for iter.First(); iter.Valid(); iter.Next() {
|
||||
item := storage.Message{}
|
||||
if err := item.UnmarshalBinary(iter.Value()); err == nil {
|
||||
v = append(v, item)
|
||||
}
|
||||
}
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// StoredSysInfo returns the system info from the store.
|
||||
func (h *Hook) StoredSysInfo() (v storage.SystemInfo, err error) {
|
||||
if h.db == nil {
|
||||
h.Log.Error("", "error", storage.ErrDBFileNotOpen)
|
||||
return
|
||||
}
|
||||
|
||||
err = h.getKv(sysInfoKey(), &v)
|
||||
if errors.Is(err, pebbledb.ErrNotFound) {
|
||||
return v, nil
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// Errorf satisfies the pebble interface for an error logger.
|
||||
func (h *Hook) Errorf(m string, v ...any) {
|
||||
h.Log.Error(fmt.Sprintf(strings.ToLower(strings.Trim(m, "\n")), v...), "v", v)
|
||||
|
||||
}
|
||||
|
||||
// Warningf satisfies the pebble interface for a warning logger.
|
||||
func (h *Hook) Warningf(m string, v ...any) {
|
||||
h.Log.Warn(fmt.Sprintf(strings.ToLower(strings.Trim(m, "\n")), v...), "v", v)
|
||||
}
|
||||
|
||||
// Infof satisfies the pebble interface for an info logger.
|
||||
func (h *Hook) Infof(m string, v ...any) {
|
||||
h.Log.Info(fmt.Sprintf(strings.ToLower(strings.Trim(m, "\n")), v...), "v", v)
|
||||
}
|
||||
|
||||
// Debugf satisfies the pebble interface for a debug logger.
|
||||
func (h *Hook) Debugf(m string, v ...any) {
|
||||
h.Log.Debug(fmt.Sprintf(strings.ToLower(strings.Trim(m, "\n")), v...), "v", v)
|
||||
}
|
||||
|
||||
// delKv deletes a key-value pair from the database.
|
||||
func (h *Hook) delKv(k string) error {
|
||||
err := h.db.Delete([]byte(k), h.mode)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to delete data", "error", err, "key", k)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// setKv stores a key-value pair in the database.
|
||||
func (h *Hook) setKv(k string, v storage.Serializable) error {
|
||||
bs, _ := v.MarshalBinary()
|
||||
err := h.db.Set([]byte(k), bs, h.mode)
|
||||
if err != nil {
|
||||
h.Log.Error("failed to update data", "error", err, "key", k)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// getKv retrieves the value associated with a key from the database.
|
||||
func (h *Hook) getKv(k string, v storage.Serializable) error {
|
||||
value, closer, err := h.db.Get([]byte(k))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
defer func() {
|
||||
if closer != nil {
|
||||
closer.Close()
|
||||
}
|
||||
}()
|
||||
return v.UnmarshalBinary(value)
|
||||
}
|
||||
hooks/storage/pebble/pebble_test.go (new file, 812 lines)
@@ -0,0 +1,812 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
// SPDX-FileCopyrightText: 2022 mochi-mqtt, mochi-co
|
||||
// SPDX-FileContributor: werbenhu
|
||||
|
||||
package pebble
|
||||
|
||||
import (
|
||||
"log/slog"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
pebbledb "github.com/cockroachdb/pebble"
|
||||
mqtt "github.com/mochi-mqtt/server/v2"
|
||||
"github.com/mochi-mqtt/server/v2/hooks/storage"
|
||||
"github.com/mochi-mqtt/server/v2/packets"
|
||||
"github.com/mochi-mqtt/server/v2/system"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
var (
|
||||
logger = slog.New(slog.NewTextHandler(os.Stdout, nil))
|
||||
|
||||
client = &mqtt.Client{
|
||||
ID: "test",
|
||||
Net: mqtt.ClientConnection{
|
||||
Remote: "test.addr",
|
||||
Listener: "listener",
|
||||
},
|
||||
Properties: mqtt.ClientProperties{
|
||||
Username: []byte("username"),
|
||||
Clean: false,
|
||||
},
|
||||
}
|
||||
|
||||
pkf = packets.Packet{Filters: packets.Subscriptions{{Filter: "a/b/c"}}}
|
||||
)
|
||||
|
||||
func teardown(t *testing.T, path string, h *Hook) {
|
||||
_ = h.Stop()
|
||||
err := os.RemoveAll("./" + strings.Replace(path, "..", "", -1))
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestKeyUpperBound(t *testing.T) {
|
||||
// Test case 1: Non-nil case
|
||||
input1 := []byte{97, 98, 99} // "abc"
|
||||
require.NotNil(t, keyUpperBound(input1))
|
||||
|
||||
// Test case 2: All bytes are 255
|
||||
input2 := []byte{255, 255, 255}
|
||||
require.Nil(t, keyUpperBound(input2))
|
||||
|
||||
// Test case 3: Empty slice
|
||||
input3 := []byte{}
|
||||
require.Nil(t, keyUpperBound(input3))
|
||||
|
||||
// Test case 4: Nil case
|
||||
input4 := []byte{255, 255, 255}
|
||||
require.Nil(t, keyUpperBound(input4))
|
||||
}
|
||||
|
||||
func TestClientKey(t *testing.T) {
|
||||
k := clientKey(&mqtt.Client{ID: "cl1"})
|
||||
require.Equal(t, storage.ClientKey+"_cl1", k)
|
||||
}
|
||||
|
||||
func TestSubscriptionKey(t *testing.T) {
|
||||
k := subscriptionKey(&mqtt.Client{ID: "cl1"}, "a/b/c")
|
||||
require.Equal(t, storage.SubscriptionKey+"_cl1:a/b/c", k)
|
||||
}
|
||||
|
||||
func TestRetainedKey(t *testing.T) {
|
||||
k := retainedKey("a/b/c")
|
||||
require.Equal(t, storage.RetainedKey+"_a/b/c", k)
|
||||
}
|
||||
|
||||
func TestInflightKey(t *testing.T) {
|
||||
k := inflightKey(&mqtt.Client{ID: "cl1"}, packets.Packet{PacketID: 1})
|
||||
require.Equal(t, storage.InflightKey+"_cl1:1", k)
|
||||
}
|
||||
|
||||
func TestSysInfoKey(t *testing.T) {
|
||||
require.Equal(t, storage.SysInfoKey, sysInfoKey())
|
||||
}
|
||||
|
||||
func TestID(t *testing.T) {
|
||||
h := new(Hook)
|
||||
require.Equal(t, "pebble-db", h.ID())
|
||||
}
|
||||
|
||||
func TestProvides(t *testing.T) {
|
||||
h := new(Hook)
|
||||
require.True(t, h.Provides(mqtt.OnSessionEstablished))
|
||||
require.True(t, h.Provides(mqtt.OnDisconnect))
|
||||
require.True(t, h.Provides(mqtt.OnSubscribed))
|
||||
require.True(t, h.Provides(mqtt.OnUnsubscribed))
|
||||
require.True(t, h.Provides(mqtt.OnRetainMessage))
|
||||
require.True(t, h.Provides(mqtt.OnQosPublish))
|
||||
require.True(t, h.Provides(mqtt.OnQosComplete))
|
||||
require.True(t, h.Provides(mqtt.OnQosDropped))
|
||||
require.True(t, h.Provides(mqtt.OnSysInfoTick))
|
||||
require.True(t, h.Provides(mqtt.StoredClients))
|
||||
require.True(t, h.Provides(mqtt.StoredInflightMessages))
|
||||
require.True(t, h.Provides(mqtt.StoredRetainedMessages))
|
||||
require.True(t, h.Provides(mqtt.StoredSubscriptions))
|
||||
require.True(t, h.Provides(mqtt.StoredSysInfo))
|
||||
require.False(t, h.Provides(mqtt.OnACLCheck))
|
||||
require.False(t, h.Provides(mqtt.OnConnectAuthenticate))
|
||||
}
|
||||
|
||||
func TestInitBadConfig(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
err := h.Init(map[string]any{})
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestInitErr(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
err := h.Init(&Options{
|
||||
Options: &pebbledb.Options{
|
||||
ReadOnly: true,
|
||||
},
|
||||
})
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestInitUseDefaults(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
require.Equal(t, defaultDbFile, h.config.Path)
|
||||
}
|
||||
|
||||
func TestOnSessionEstablishedThenOnDisconnect(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
h.OnSessionEstablished(client, packets.Packet{})
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.getKv(clientKey(client), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.ID)
|
||||
require.Equal(t, client.Properties.Username, r.Username)
|
||||
require.Equal(t, client.Properties.Clean, r.Clean)
|
||||
require.Equal(t, client.Net.Remote, r.Remote)
|
||||
require.Equal(t, client.Net.Listener, r.Listener)
|
||||
require.NotSame(t, client, r)
|
||||
|
||||
h.OnDisconnect(client, nil, false)
|
||||
r2 := new(storage.Client)
|
||||
err = h.getKv(clientKey(client), r2)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.ID)
|
||||
|
||||
h.OnDisconnect(client, nil, true)
|
||||
r3 := new(storage.Client)
|
||||
err = h.getKv(clientKey(client), r3)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, pebbledb.ErrNotFound, err)
|
||||
require.Empty(t, r3.ID)
|
||||
|
||||
}
|
||||
|
||||
func TestOnClientExpired(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
cl := &mqtt.Client{ID: "cl1"}
|
||||
clientKey := clientKey(cl)
|
||||
|
||||
err = h.setKv(clientKey, &storage.Client{ID: cl.ID})
|
||||
require.NoError(t, err)
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.getKv(clientKey, r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, cl.ID, r.ID)
|
||||
|
||||
h.OnClientExpired(cl)
|
||||
err = h.getKv(clientKey, r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, pebbledb.ErrNotFound)
|
||||
}
|
||||
|
||||
func TestOnClientExpiredNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnClientExpired(client)
|
||||
}
|
||||
|
||||
func TestOnClientExpiredClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnClientExpired(client)
|
||||
}
|
||||
|
||||
func TestOnSessionEstablishedNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnSessionEstablished(client, packets.Packet{})
|
||||
}
|
||||
|
||||
func TestOnSessionEstablishedClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnSessionEstablished(client, packets.Packet{})
|
||||
}
|
||||
|
||||
func TestOnWillSent(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
c1 := client
|
||||
c1.Properties.Will.Flag = 1
|
||||
h.OnWillSent(c1, packets.Packet{})
|
||||
|
||||
r := new(storage.Client)
|
||||
err = h.getKv(clientKey(client), r)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, uint32(1), r.Will.Flag)
|
||||
require.NotSame(t, client, r)
|
||||
}
|
||||
|
||||
func TestOnDisconnectNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnDisconnect(client, nil, false)
|
||||
}
|
||||
|
||||
func TestOnDisconnectClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnDisconnect(client, nil, false)
|
||||
}
|
||||
|
||||
func TestOnDisconnectSessionTakenOver(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
testClient := &mqtt.Client{
|
||||
ID: "test",
|
||||
Net: mqtt.ClientConnection{
|
||||
Remote: "test.addr",
|
||||
Listener: "listener",
|
||||
},
|
||||
Properties: mqtt.ClientProperties{
|
||||
Username: []byte("username"),
|
||||
Clean: false,
|
||||
},
|
||||
}
|
||||
|
||||
testClient.Stop(packets.ErrSessionTakenOver)
|
||||
h.OnDisconnect(testClient, nil, true)
|
||||
teardown(t, h.config.Path, h)
|
||||
}
|
||||
|
||||
func TestOnSubscribedThenOnUnsubscribed(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
h.OnSubscribed(client, pkf, []byte{0})
|
||||
r := new(storage.Subscription)
|
||||
|
||||
err = h.getKv(subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, client.ID, r.Client)
|
||||
require.Equal(t, pkf.Filters[0].Filter, r.Filter)
|
||||
require.Equal(t, byte(0), r.Qos)
|
||||
|
||||
h.OnUnsubscribed(client, pkf)
|
||||
err = h.getKv(subscriptionKey(client, pkf.Filters[0].Filter), r)
|
||||
require.Error(t, err)
|
||||
require.Equal(t, pebbledb.ErrNotFound, err)
|
||||
}
|
||||
|
||||
func TestOnSubscribedNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnSubscribed(client, pkf, []byte{0})
|
||||
}
|
||||
|
||||
func TestOnSubscribedClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnSubscribed(client, pkf, []byte{0})
|
||||
}
|
||||
|
||||
func TestOnUnsubscribedNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnUnsubscribed(client, pkf)
|
||||
}
|
||||
|
||||
func TestOnUnsubscribedClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnUnsubscribed(client, pkf)
|
||||
}
|
||||
|
||||
func TestOnRetainMessageThenUnset(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
pk := packets.Packet{
|
||||
FixedHeader: packets.FixedHeader{
|
||||
Retain: true,
|
||||
},
|
||||
Payload: []byte("hello"),
|
||||
TopicName: "a/b/c",
|
||||
}
|
||||
|
||||
h.OnRetainMessage(client, pk, 1)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, pk.TopicName, r.TopicName)
|
||||
require.Equal(t, pk.Payload, r.Payload)
|
||||
|
||||
h.OnRetainMessage(client, pk, -1)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, pebbledb.ErrNotFound)
|
||||
|
||||
// coverage: delete deleted
|
||||
h.OnRetainMessage(client, pk, -1)
|
||||
err = h.getKv(retainedKey(pk.TopicName), r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, pebbledb.ErrNotFound)
|
||||
}
|
||||
|
||||
func TestOnRetainedExpired(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
m := &storage.Message{
|
||||
ID: retainedKey("a/b/c"),
|
||||
T: storage.RetainedKey,
|
||||
TopicName: "a/b/c",
|
||||
}
|
||||
|
||||
err = h.setKv(m.ID, m)
|
||||
require.NoError(t, err)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.getKv(m.ID, r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, m.TopicName, r.TopicName)
|
||||
|
||||
h.OnRetainedExpired(m.TopicName)
|
||||
err = h.getKv(m.ID, r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, pebbledb.ErrNotFound)
|
||||
}
|
||||
|
||||
func TestOnRetainExpiredNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnRetainedExpired("a/b/c")
|
||||
}
|
||||
|
||||
func TestOnRetainExpiredClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnRetainedExpired("a/b/c")
|
||||
}
|
||||
|
||||
func TestOnRetainMessageNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnRetainMessage(client, packets.Packet{}, 0)
|
||||
}
|
||||
|
||||
func TestOnRetainMessageClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnRetainMessage(client, packets.Packet{}, 0)
|
||||
}
|
||||
|
||||
func TestOnQosPublishThenQOSComplete(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
pk := packets.Packet{
|
||||
FixedHeader: packets.FixedHeader{
|
||||
Retain: true,
|
||||
Qos: 2,
|
||||
},
|
||||
Payload: []byte("hello"),
|
||||
TopicName: "a/b/c",
|
||||
}
|
||||
|
||||
h.OnQosPublish(client, pk, time.Now().Unix(), 0)
|
||||
|
||||
r := new(storage.Message)
|
||||
err = h.getKv(inflightKey(client, pk), r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, pk.TopicName, r.TopicName)
|
||||
require.Equal(t, pk.Payload, r.Payload)
|
||||
|
||||
// ensure dates are properly saved
|
||||
require.True(t, r.Sent > 0)
|
||||
require.True(t, time.Now().Unix()-1 < r.Sent)
|
||||
|
||||
// OnQosDropped is a passthrough to OnQosComplete here
|
||||
h.OnQosDropped(client, pk)
|
||||
err = h.getKv(inflightKey(client, pk), r)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, pebbledb.ErrNotFound)
|
||||
}
|
||||
|
||||
func TestOnQosPublishNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnQosPublish(client, packets.Packet{}, time.Now().Unix(), 0)
|
||||
}
|
||||
|
||||
func TestOnQosPublishClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnQosPublish(client, packets.Packet{}, time.Now().Unix(), 0)
|
||||
}
|
||||
|
||||
func TestOnQosCompleteNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnQosComplete(client, packets.Packet{})
|
||||
}
|
||||
|
||||
func TestOnQosCompleteClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnQosComplete(client, packets.Packet{})
|
||||
}
|
||||
|
||||
func TestOnQosDroppedNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnQosDropped(client, packets.Packet{})
|
||||
}
|
||||
|
||||
func TestOnSysInfoTick(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
info := &system.Info{
|
||||
Version: "2.0.0",
|
||||
BytesReceived: 100,
|
||||
}
|
||||
|
||||
h.OnSysInfoTick(info)
|
||||
|
||||
r := new(storage.SystemInfo)
|
||||
err = h.getKv(storage.SysInfoKey, r)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, info.Version, r.Version)
|
||||
require.Equal(t, info.BytesReceived, r.BytesReceived)
|
||||
require.NotSame(t, info, r)
|
||||
}
|
||||
|
||||
func TestOnSysInfoTickNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.OnSysInfoTick(new(system.Info))
|
||||
}
|
||||
|
||||
func TestOnSysInfoTickClosedDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
teardown(t, h.config.Path, h)
|
||||
h.OnSysInfoTick(new(system.Info))
|
||||
}
|
||||
|
||||
func TestStoredClients(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with clients
|
||||
err = h.setKv(storage.ClientKey+"_cl1", &storage.Client{ID: "cl1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.ClientKey+"_cl2", &storage.Client{ID: "cl2"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.ClientKey+"_cl3", &storage.Client{ID: "cl3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredClients()
|
||||
require.NoError(t, err)
|
||||
require.Len(t, r, 3)
|
||||
require.Equal(t, "cl1", r[0].ID)
|
||||
require.Equal(t, "cl2", r[1].ID)
|
||||
require.Equal(t, "cl3", r[2].ID)
|
||||
}
|
||||
|
||||
func TestStoredClientsNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredClients()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestStoredSubscriptions(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with subscriptions
|
||||
err = h.setKv(storage.SubscriptionKey+"_sub1", &storage.Subscription{ID: "sub1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.SubscriptionKey+"_sub2", &storage.Subscription{ID: "sub2"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.SubscriptionKey+"_sub3", &storage.Subscription{ID: "sub3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredSubscriptions()
|
||||
require.NoError(t, err)
|
||||
require.Len(t, r, 3)
|
||||
require.Equal(t, "sub1", r[0].ID)
|
||||
require.Equal(t, "sub2", r[1].ID)
|
||||
require.Equal(t, "sub3", r[2].ID)
|
||||
}
|
||||
|
||||
func TestStoredSubscriptionsNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredSubscriptions()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestStoredRetainedMessages(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with messages
|
||||
err = h.setKv(storage.RetainedKey+"_m1", &storage.Message{ID: "m1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.RetainedKey+"_m2", &storage.Message{ID: "m2"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.RetainedKey+"_m3", &storage.Message{ID: "m3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.InflightKey+"_i3", &storage.Message{ID: "i3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredRetainedMessages()
|
||||
require.NoError(t, err)
|
||||
require.Len(t, r, 3)
|
||||
require.Equal(t, "m1", r[0].ID)
|
||||
require.Equal(t, "m2", r[1].ID)
|
||||
require.Equal(t, "m3", r[2].ID)
|
||||
}
|
||||
|
||||
func TestStoredRetainedMessagesNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredRetainedMessages()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestStoredInflightMessages(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
// populate with messages
|
||||
err = h.setKv(storage.InflightKey+"_i1", &storage.Message{ID: "i1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.InflightKey+"_i2", &storage.Message{ID: "i2"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.InflightKey+"_i3", &storage.Message{ID: "i3"})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv(storage.RetainedKey+"_m1", &storage.Message{ID: "m1"})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err := h.StoredInflightMessages()
|
||||
require.NoError(t, err)
|
||||
require.Len(t, r, 3)
|
||||
require.Equal(t, "i1", r[0].ID)
|
||||
require.Equal(t, "i2", r[1].ID)
|
||||
require.Equal(t, "i3", r[2].ID)
|
||||
}
|
||||
|
||||
func TestStoredInflightMessagesNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredInflightMessages()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestStoredSysInfo(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(nil)
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
r, err := h.StoredSysInfo()
|
||||
require.NoError(t, err)
|
||||
|
||||
// populate with messages
|
||||
err = h.setKv(storage.SysInfoKey, &storage.SystemInfo{
|
||||
ID: storage.SysInfoKey,
|
||||
Info: system.Info{
|
||||
Version: "2.0.0",
|
||||
},
|
||||
T: storage.SysInfoKey,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
r, err = h.StoredSysInfo()
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "2.0.0", r.Info.Version)
|
||||
}
|
||||
|
||||
func TestStoredSysInfoNoDB(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
v, err := h.StoredSysInfo()
|
||||
require.Empty(t, v)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestErrorf(t *testing.T) {
|
||||
// coverage: one day check log hook
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.Errorf("test", 1, 2, 3)
|
||||
}
|
||||
|
||||
func TestWarningf(t *testing.T) {
|
||||
// coverage: one day check log hook
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.Warningf("test", 1, 2, 3)
|
||||
}
|
||||
|
||||
func TestInfof(t *testing.T) {
|
||||
// coverage: one day check log hook
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.Infof("test", 1, 2, 3)
|
||||
}
|
||||
|
||||
func TestDebugf(t *testing.T) {
|
||||
// coverage: one day check log hook
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
h.Debugf("test", 1, 2, 3)
|
||||
}
|
||||
|
||||
func TestGetSetDelKv(t *testing.T) {
|
||||
opts := []struct {
|
||||
name string
|
||||
opt *Options
|
||||
}{
|
||||
{
|
||||
name: "NoSync",
|
||||
opt: &Options{
|
||||
Mode: NoSync,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Sync",
|
||||
opt: &Options{
|
||||
Mode: Sync,
|
||||
},
|
||||
},
|
||||
}
|
||||
for _, tt := range opts {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
err := h.Init(tt.opt)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv("testKey", &storage.Client{ID: "testId"})
|
||||
require.NoError(t, err)
|
||||
|
||||
var obj storage.Client
|
||||
err = h.getKv("testKey", &obj)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.delKv("testKey")
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.getKv("testKey", &obj)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, pebbledb.ErrNotFound, err)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestGetSetDelKvErr(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
err := h.Init(&Options{
|
||||
Mode: Sync,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
err = h.setKv("testKey", &storage.Client{ID: "testId"})
|
||||
require.NoError(t, err)
|
||||
h.Stop()
|
||||
|
||||
h = new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
err = h.Init(&Options{
|
||||
Mode: Sync,
|
||||
Options: &pebbledb.Options{
|
||||
ReadOnly: true,
|
||||
},
|
||||
})
|
||||
require.NoError(t, err)
|
||||
defer teardown(t, h.config.Path, h)
|
||||
|
||||
err = h.setKv("testKey", &storage.Client{ID: "testId"})
|
||||
require.Error(t, err)
|
||||
|
||||
err = h.delKv("testKey")
|
||||
require.Error(t, err)
|
||||
}
|
||||
@@ -51,8 +51,12 @@ func sysInfoKey() string {
|
||||
|
||||
// Options contains configuration settings for the redis instance.
|
||||
type Options struct {
|
||||
HPrefix string
|
||||
Options *redis.Options
|
||||
Address string `yaml:"address" json:"address"`
|
||||
Username string `yaml:"username" json:"username"`
|
||||
Password string `yaml:"password" json:"password"`
|
||||
Database int `yaml:"database" json:"database"`
|
||||
HPrefix string `yaml:"h_prefix" json:"h_prefix"`
|
||||
Options *redis.Options
|
||||
}
|
||||
|
||||
// Hook is a persistent storage hook using Redis as a backend.
|
||||
@@ -105,23 +109,31 @@ func (h *Hook) Init(config any) error {
|
||||
h.ctx = context.Background()
|
||||
|
||||
if config == nil {
|
||||
config = &Options{
|
||||
Options: &redis.Options{
|
||||
Addr: defaultAddr,
|
||||
},
|
||||
config = new(Options)
|
||||
}
|
||||
h.config = config.(*Options)
|
||||
if h.config.Options == nil {
|
||||
h.config.Options = &redis.Options{
|
||||
Addr: defaultAddr,
|
||||
}
|
||||
h.config.Options.Addr = h.config.Address
|
||||
h.config.Options.DB = h.config.Database
|
||||
h.config.Options.Username = h.config.Username
|
||||
h.config.Options.Password = h.config.Password
|
||||
}
|
||||
|
||||
h.config = config.(*Options)
|
||||
if h.config.HPrefix == "" {
|
||||
h.config.HPrefix = defaultHPrefix
|
||||
}
|
||||
|
||||
h.Log.Info("connecting to redis service",
|
||||
h.Log.Info(
|
||||
"connecting to redis service",
|
||||
"prefix", h.config.HPrefix,
|
||||
"address", h.config.Options.Addr,
|
||||
"username", h.config.Options.Username,
|
||||
"password-len", len(h.config.Options.Password),
|
||||
"db", h.config.Options.DB)
|
||||
"db", h.config.Options.DB,
|
||||
)
|
||||
|
||||
h.db = redis.NewClient(h.config.Options)
|
||||
_, err := h.db.Ping(context.Background()).Result()
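
A hedged sketch of the new flat configuration fields in use; when the embedded go-redis Options is nil, Init copies these values onto the client options as shown above (server is assumed to be an initialized *mqtt.Server and the hook package imported as redis):

```go
// Connect the broker's persistence layer to a local redis instance.
err := server.AddHook(new(redis.Hook), &redis.Options{
	Address:  "localhost:6379",
	Database: 1,
	HPrefix:  "mochi-",
})
if err != nil {
	log.Fatal(err)
}
```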
|
||||
@@ -275,6 +287,7 @@ func (h *Hook) OnRetainMessage(cl *mqtt.Client, pk packets.Packet, r int64) {
|
||||
TopicName: pk.TopicName,
|
||||
Payload: pk.Payload,
|
||||
Created: pk.Created,
|
||||
Client: cl.ID,
|
||||
Origin: pk.Origin,
|
||||
Properties: storage.MessageProperties{
|
||||
PayloadFormat: props.PayloadFormat,
|
||||
@@ -305,6 +318,7 @@ func (h *Hook) OnQosPublish(cl *mqtt.Client, pk packets.Packet, sent int64, rese
|
||||
in := &storage.Message{
|
||||
ID: inflightKey(cl, pk),
|
||||
T: storage.InflightKey,
|
||||
Client: cl.ID,
|
||||
Origin: pk.Origin,
|
||||
FixedHeader: pk.FixedHeader,
|
||||
TopicName: pk.TopicName,
|
||||
|
||||
@@ -135,6 +135,29 @@ func TestInitUseDefaults(t *testing.T) {
|
||||
require.Equal(t, defaultAddr, h.config.Options.Addr)
|
||||
}
|
||||
|
||||
func TestInitUsePassConfig(t *testing.T) {
|
||||
s := miniredis.RunT(t)
|
||||
s.StartAddr(defaultAddr)
|
||||
defer s.Close()
|
||||
|
||||
h := newHook(t, "")
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
err := h.Init(&Options{
|
||||
Address: defaultAddr,
|
||||
Username: "username",
|
||||
Password: "password",
|
||||
Database: 2,
|
||||
})
|
||||
require.Error(t, err)
|
||||
h.db.FlushAll(h.ctx)
|
||||
|
||||
require.Equal(t, defaultAddr, h.config.Options.Addr)
|
||||
require.Equal(t, "username", h.config.Options.Username)
|
||||
require.Equal(t, "password", h.config.Options.Password)
|
||||
require.Equal(t, 2, h.config.Options.DB)
|
||||
}
|
||||
|
||||
func TestInitBadConfig(t *testing.T) {
|
||||
h := new(Hook)
|
||||
h.SetOpts(logger, nil)
|
||||
|
||||
@@ -25,6 +25,12 @@ var (
|
||||
ErrDBFileNotOpen = errors.New("db file not open")
|
||||
)
|
||||
|
||||
// Serializable is an interface for objects that can be serialized and deserialized.
|
||||
type Serializable interface {
|
||||
UnmarshalBinary([]byte) error
|
||||
MarshalBinary() (data []byte, err error)
|
||||
}
|
||||
|
||||
// Client is a storable representation of an MQTT client.
|
||||
type Client struct {
|
||||
Will ClientWill `json:"will"` // will topic and payload data if applicable
|
||||
@@ -40,28 +46,28 @@ type Client struct {
|
||||
|
||||
// ClientProperties contains a limited set of the mqtt v5 properties specific to a client connection.
|
||||
type ClientProperties struct {
|
||||
AuthenticationData []byte `json:"authenticationData"`
|
||||
User []packets.UserProperty `json:"user"`
|
||||
AuthenticationMethod string `json:"authenticationMethod"`
|
||||
SessionExpiryInterval uint32 `json:"sessionExpiryInterval"`
|
||||
MaximumPacketSize uint32 `json:"maximumPacketSize"`
|
||||
ReceiveMaximum uint16 `json:"receiveMaximum"`
|
||||
TopicAliasMaximum uint16 `json:"topicAliasMaximum"`
|
||||
SessionExpiryIntervalFlag bool `json:"sessionExpiryIntervalFlag"`
|
||||
RequestProblemInfo byte `json:"requestProblemInfo"`
|
||||
RequestProblemInfoFlag bool `json:"requestProblemInfoFlag"`
|
||||
RequestResponseInfo byte `json:"requestResponseInfo"`
|
||||
AuthenticationData []byte `json:"authenticationData,omitempty"`
|
||||
User []packets.UserProperty `json:"user,omitempty"`
|
||||
AuthenticationMethod string `json:"authenticationMethod,omitempty"`
|
||||
SessionExpiryInterval uint32 `json:"sessionExpiryInterval,omitempty"`
|
||||
MaximumPacketSize uint32 `json:"maximumPacketSize,omitempty"`
|
||||
ReceiveMaximum uint16 `json:"receiveMaximum,omitempty"`
|
||||
TopicAliasMaximum uint16 `json:"topicAliasMaximum,omitempty"`
|
||||
SessionExpiryIntervalFlag bool `json:"sessionExpiryIntervalFlag,omitempty"`
|
||||
RequestProblemInfo byte `json:"requestProblemInfo,omitempty"`
|
||||
RequestProblemInfoFlag bool `json:"requestProblemInfoFlag,omitempty"`
|
||||
RequestResponseInfo byte `json:"requestResponseInfo,omitempty"`
|
||||
}
|
||||
|
||||
// ClientWill contains a will message for a client, and limited mqtt v5 properties.
|
||||
type ClientWill struct {
|
||||
Payload []byte `json:"payload"`
|
||||
User []packets.UserProperty `json:"user"`
|
||||
TopicName string `json:"topicName"`
|
||||
Flag uint32 `json:"flag"`
|
||||
WillDelayInterval uint32 `json:"willDelayInterval"`
|
||||
Qos byte `json:"qos"`
|
||||
Retain bool `json:"retain"`
|
||||
Payload []byte `json:"payload,omitempty"`
|
||||
User []packets.UserProperty `json:"user,omitempty"`
|
||||
TopicName string `json:"topicName,omitempty"`
|
||||
Flag uint32 `json:"flag,omitempty"`
|
||||
WillDelayInterval uint32 `json:"willDelayInterval,omitempty"`
|
||||
Qos byte `json:"qos,omitempty"`
|
||||
Retain bool `json:"retain,omitempty"`
|
||||
}
|
||||
|
||||
// MarshalBinary encodes the values into a json string.
|
||||
@@ -79,29 +85,30 @@ func (d *Client) UnmarshalBinary(data []byte) error {
|
||||
|
||||
// Message is a storable representation of an MQTT message (specifically publish).
|
||||
type Message struct {
|
||||
Properties MessageProperties `json:"properties"` // -
|
||||
Payload []byte `json:"payload"` // the message payload (if retained)
|
||||
T string `json:"t"` // the data type
|
||||
ID string `json:"id" storm:"id"` // the storage key
|
||||
Origin string `json:"origin"` // the id of the client who sent the message
|
||||
TopicName string `json:"topic_name"` // the topic the message was sent to (if retained)
|
||||
FixedHeader packets.FixedHeader `json:"fixedheader"` // the header properties of the message
|
||||
Created int64 `json:"created"` // the time the message was created in unixtime
|
||||
Sent int64 `json:"sent"` // the last time the message was sent (for retries) in unixtime (if inflight)
|
||||
PacketID uint16 `json:"packet_id"` // the unique id of the packet (if inflight)
|
||||
Properties MessageProperties `json:"properties"` // -
|
||||
Payload []byte `json:"payload"` // the message payload (if retained)
|
||||
T string `json:"t,omitempty"` // the data type
|
||||
ID string `json:"id,omitempty" storm:"id"` // the storage key
|
||||
Client string `json:"client,omitempty"` // the client id the message is for
|
||||
Origin string `json:"origin,omitempty"` // the id of the client who sent the message
|
||||
TopicName string `json:"topic_name,omitempty"` // the topic the message was sent to (if retained)
|
||||
FixedHeader packets.FixedHeader `json:"fixedheader"` // the header properties of the message
|
||||
Created int64 `json:"created,omitempty"` // the time the message was created in unixtime
|
||||
Sent int64 `json:"sent,omitempty"` // the last time the message was sent (for retries) in unixtime (if inflight)
|
||||
PacketID uint16 `json:"packet_id,omitempty"` // the unique id of the packet (if inflight)
|
||||
}
|
||||
|
||||
// MessageProperties contains a limited subset of mqtt v5 properties specific to publish messages.
|
||||
type MessageProperties struct {
|
||||
CorrelationData []byte `json:"correlationData"`
|
||||
SubscriptionIdentifier []int `json:"subscriptionIdentifier"`
|
||||
User []packets.UserProperty `json:"user"`
|
||||
ContentType string `json:"contentType"`
|
||||
ResponseTopic string `json:"responseTopic"`
|
||||
MessageExpiryInterval uint32 `json:"messageExpiry"`
|
||||
TopicAlias uint16 `json:"topicAlias"`
|
||||
PayloadFormat byte `json:"payloadFormat"`
|
||||
PayloadFormatFlag bool `json:"payloadFormatFlag"`
|
||||
CorrelationData []byte `json:"correlationData,omitempty"`
|
||||
SubscriptionIdentifier []int `json:"subscriptionIdentifier,omitempty"`
|
||||
User []packets.UserProperty `json:"user,omitempty"`
|
||||
ContentType string `json:"contentType,omitempty"`
|
||||
ResponseTopic string `json:"responseTopic,omitempty"`
|
||||
MessageExpiryInterval uint32 `json:"messageExpiry,omitempty"`
|
||||
TopicAlias uint16 `json:"topicAlias,omitempty"`
|
||||
PayloadFormat byte `json:"payloadFormat,omitempty"`
|
||||
PayloadFormatFlag bool `json:"payloadFormatFlag,omitempty"`
|
||||
}
|
||||
|
||||
// MarshalBinary encodes the values into a json string.
|
||||
@@ -149,15 +156,15 @@ func (d *Message) ToPacket() packets.Packet {
|
||||
|
||||
// Subscription is a storable representation of an MQTT subscription.
|
||||
type Subscription struct {
|
||||
T string `json:"t"`
|
||||
ID string `json:"id" storm:"id"`
|
||||
Client string `json:"client"`
|
||||
T string `json:"t,omitempty"`
|
||||
ID string `json:"id,omitempty" storm:"id"`
|
||||
Client string `json:"client,omitempty"`
|
||||
Filter string `json:"filter"`
|
||||
Identifier int `json:"identifier"`
|
||||
RetainHandling byte `json:"retain_handling"`
|
||||
Identifier int `json:"identifier,omitempty"`
|
||||
RetainHandling byte `json:"retain_handling,omitempty"`
|
||||
Qos byte `json:"qos"`
|
||||
RetainAsPublished bool `json:"retain_as_pub"`
|
||||
NoLocal bool `json:"no_local"`
|
||||
RetainAsPublished bool `json:"retain_as_pub,omitempty"`
|
||||
NoLocal bool `json:"no_local,omitempty"`
|
||||
}
|
||||
|
||||
// MarshalBinary encodes the values into a json string.
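
The practical effect of the new omitempty tags, sketched for a Subscription; the output matches the updated test fixture in the next hunk (fmt import omitted):

```go
s := storage.Subscription{T: "subscription", ID: "id", Client: "mochi", Filter: "a/b/c", Qos: 1}
b, _ := s.MarshalBinary()
fmt.Println(string(b))
// {"t":"subscription","id":"id","client":"mochi","filter":"a/b/c","qos":1}
// zero-valued fields such as identifier, retain_handling and no_local are no longer emitted
```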
|
||||
|
||||
@@ -89,7 +89,7 @@ var (
|
||||
Filter: "a/b/c",
|
||||
Qos: 1,
|
||||
}
|
||||
subscriptionJSON = []byte(`{"t":"subscription","id":"id","client":"mochi","filter":"a/b/c","identifier":0,"retain_handling":0,"qos":1,"retain_as_pub":false,"no_local":false}`)
|
||||
subscriptionJSON = []byte(`{"t":"subscription","id":"id","client":"mochi","filter":"a/b/c","qos":1}`)
|
||||
|
||||
sysInfoStruct = SystemInfo{
|
||||
T: "info",
|
||||
|
||||
@@ -13,24 +13,23 @@ import (
|
||||
"time"
|
||||
)
|
||||
|
||||
const TypeHealthCheck = "healthcheck"
|
||||
|
||||
// HTTPHealthCheck is a listener for providing an HTTP healthcheck endpoint.
|
||||
type HTTPHealthCheck struct {
|
||||
sync.RWMutex
|
||||
id string // the internal id of the listener
|
||||
address string // the network address to bind to
|
||||
config *Config // configuration values for the listener
|
||||
config Config // configuration values for the listener
|
||||
listen *http.Server // the http server
|
||||
end uint32 // ensure the close methods are only called once
|
||||
}
|
||||
|
||||
// NewHTTPHealthCheck initialises and returns a new HTTP listener, listening on an address.
|
||||
func NewHTTPHealthCheck(id, address string, config *Config) *HTTPHealthCheck {
|
||||
if config == nil {
|
||||
config = new(Config)
|
||||
}
|
||||
// NewHTTPHealthCheck initializes and returns a new HTTP listener, listening on an address.
|
||||
func NewHTTPHealthCheck(config Config) *HTTPHealthCheck {
|
||||
return &HTTPHealthCheck{
|
||||
id: id,
|
||||
address: address,
|
||||
id: config.ID,
|
||||
address: config.Address,
|
||||
config: config,
|
||||
}
|
||||
}
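
A hedged sketch of the updated constructor call, which now takes the shared listeners Config rather than separate id, address and config arguments (values are illustrative; server is assumed to be an initialized *mqtt.Server):

```go
hc := listeners.NewHTTPHealthCheck(listeners.Config{
	ID:      "healthcheck",
	Address: ":8080",
})
if err := server.AddListener(hc); err != nil {
	log.Fatal(err)
}
```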
|
||||
|
||||
@@ -14,47 +14,44 @@ import (
|
||||
)
|
||||
|
||||
func TestNewHTTPHealthCheck(t *testing.T) {
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, nil)
|
||||
require.Equal(t, "healthcheck", l.id)
|
||||
require.Equal(t, testAddr, l.address)
|
||||
l := NewHTTPHealthCheck(basicConfig)
|
||||
require.Equal(t, basicConfig.ID, l.id)
|
||||
require.Equal(t, basicConfig.Address, l.address)
|
||||
}
|
||||
|
||||
func TestHTTPHealthCheckID(t *testing.T) {
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, nil)
|
||||
require.Equal(t, "healthcheck", l.ID())
|
||||
l := NewHTTPHealthCheck(basicConfig)
|
||||
require.Equal(t, basicConfig.ID, l.ID())
|
||||
}
|
||||
|
||||
func TestHTTPHealthCheckAddress(t *testing.T) {
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, nil)
|
||||
require.Equal(t, testAddr, l.Address())
|
||||
l := NewHTTPHealthCheck(basicConfig)
|
||||
require.Equal(t, basicConfig.Address, l.Address())
|
||||
}
|
||||
|
||||
func TestHTTPHealthCheckProtocol(t *testing.T) {
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, nil)
|
||||
l := NewHTTPHealthCheck(basicConfig)
|
||||
require.Equal(t, "http", l.Protocol())
|
||||
}
|
||||
|
||||
func TestHTTPHealthCheckTLSProtocol(t *testing.T) {
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
})
|
||||
|
||||
l := NewHTTPHealthCheck(tlsConfig)
|
||||
_ = l.Init(logger)
|
||||
require.Equal(t, "https", l.Protocol())
|
||||
}
|
||||
|
||||
func TestHTTPHealthCheckInit(t *testing.T) {
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, nil)
|
||||
l := NewHTTPHealthCheck(basicConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NotNil(t, l.listen)
|
||||
require.Equal(t, testAddr, l.listen.Addr)
|
||||
require.Equal(t, basicConfig.Address, l.listen.Addr)
|
||||
}
|
||||
|
||||
func TestHTTPHealthCheckServeAndClose(t *testing.T) {
|
||||
// setup http stats listener
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, nil)
|
||||
l := NewHTTPHealthCheck(basicConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -90,7 +87,7 @@ func TestHTTPHealthCheckServeAndClose(t *testing.T) {
|
||||
|
||||
func TestHTTPHealthCheckServeAndCloseMethodNotAllowed(t *testing.T) {
|
||||
// setup http stats listener
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, nil)
|
||||
l := NewHTTPHealthCheck(basicConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -125,10 +122,7 @@ func TestHTTPHealthCheckServeAndCloseMethodNotAllowed(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestHTTPHealthCheckServeTLSAndClose(t *testing.T) {
|
||||
l := NewHTTPHealthCheck("healthcheck", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
})
|
||||
|
||||
l := NewHTTPHealthCheck(tlsConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
|
||||
@@ -17,27 +17,26 @@ import (
|
||||
"github.com/mochi-mqtt/server/v2/system"
|
||||
)
|
||||
|
||||
const TypeSysInfo = "sysinfo"
|
||||
|
||||
// HTTPStats is a listener for presenting the server $SYS stats on a JSON http endpoint.
|
||||
type HTTPStats struct {
|
||||
sync.RWMutex
|
||||
id string // the internal id of the listener
|
||||
address string // the network address to bind to
|
||||
config *Config // configuration values for the listener
|
||||
config Config // configuration values for the listener
|
||||
listen *http.Server // the http server
|
||||
sysInfo *system.Info // pointers to the server data
|
||||
log *slog.Logger // server logger
|
||||
end uint32 // ensure the close methods are only called once
|
||||
}
|
||||
|
||||
// NewHTTPStats initialises and returns a new HTTP listener, listening on an address.
|
||||
func NewHTTPStats(id, address string, config *Config, sysInfo *system.Info) *HTTPStats {
|
||||
if config == nil {
|
||||
config = new(Config)
|
||||
}
|
||||
// NewHTTPStats initializes and returns a new HTTP listener, listening on an address.
|
||||
func NewHTTPStats(config Config, sysInfo *system.Info) *HTTPStats {
|
||||
return &HTTPStats{
|
||||
id: id,
|
||||
address: address,
|
||||
sysInfo: sysInfo,
|
||||
id: config.ID,
|
||||
address: config.Address,
|
||||
config: config,
|
||||
}
|
||||
}
|
||||
|
||||
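Editorial note (not part of the diff): the $SYS stats listener follows the same new pattern as the other listeners, but additionally receives the server's system.Info pointer so it can serve the live counters over HTTP. A minimal sketch, assuming a *mqtt.Server named server is already in scope and that the address is an arbitrary example:

```go
// Expose the $SYS stats as JSON on an HTTP endpoint.
stats := listeners.NewHTTPStats(listeners.Config{ID: "stats", Address: ":8080"}, server.Info)
if err := server.AddListener(stats); err != nil {
	log.Fatal(err)
}
```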
@@ -17,38 +17,35 @@ import (
|
||||
)
|
||||
|
||||
func TestNewHTTPStats(t *testing.T) {
|
||||
l := NewHTTPStats("t1", testAddr, nil, nil)
|
||||
l := NewHTTPStats(basicConfig, nil)
|
||||
require.Equal(t, "t1", l.id)
|
||||
require.Equal(t, testAddr, l.address)
|
||||
}
|
||||
|
||||
func TestHTTPStatsID(t *testing.T) {
|
||||
l := NewHTTPStats("t1", testAddr, nil, nil)
|
||||
l := NewHTTPStats(basicConfig, nil)
|
||||
require.Equal(t, "t1", l.ID())
|
||||
}
|
||||
|
||||
func TestHTTPStatsAddress(t *testing.T) {
|
||||
l := NewHTTPStats("t1", testAddr, nil, nil)
|
||||
l := NewHTTPStats(basicConfig, nil)
|
||||
require.Equal(t, testAddr, l.Address())
|
||||
}
|
||||
|
||||
func TestHTTPStatsProtocol(t *testing.T) {
|
||||
l := NewHTTPStats("t1", testAddr, nil, nil)
|
||||
l := NewHTTPStats(basicConfig, nil)
|
||||
require.Equal(t, "http", l.Protocol())
|
||||
}
|
||||
|
||||
func TestHTTPStatsTLSProtocol(t *testing.T) {
|
||||
l := NewHTTPStats("t1", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
}, nil)
|
||||
|
||||
l := NewHTTPStats(tlsConfig, nil)
|
||||
_ = l.Init(logger)
|
||||
require.Equal(t, "https", l.Protocol())
|
||||
}
|
||||
|
||||
func TestHTTPStatsInit(t *testing.T) {
|
||||
sysInfo := new(system.Info)
|
||||
l := NewHTTPStats("t1", testAddr, nil, sysInfo)
|
||||
l := NewHTTPStats(basicConfig, sysInfo)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -64,7 +61,7 @@ func TestHTTPStatsServeAndClose(t *testing.T) {
|
||||
}
|
||||
|
||||
// setup http stats listener
|
||||
l := NewHTTPStats("t1", testAddr, nil, sysInfo)
|
||||
l := NewHTTPStats(basicConfig, sysInfo)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -109,9 +106,7 @@ func TestHTTPStatsServeTLSAndClose(t *testing.T) {
|
||||
Version: "test",
|
||||
}
|
||||
|
||||
l := NewHTTPStats("t1", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
}, sysInfo)
|
||||
l := NewHTTPStats(tlsConfig, sysInfo)
|
||||
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
@@ -132,7 +127,9 @@ func TestHTTPStatsFailedToServe(t *testing.T) {
|
||||
}
|
||||
|
||||
// setup http stats listener
|
||||
l := NewHTTPStats("t1", "wrong_addr", nil, sysInfo)
|
||||
config := basicConfig
|
||||
config.Address = "wrong_addr"
|
||||
l := NewHTTPStats(config, sysInfo)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
|
||||
@@ -14,8 +14,10 @@ import (
|
||||
|
||||
// Config contains configuration values for a listener.
|
||||
type Config struct {
|
||||
// TLSConfig is a tls.Config configuration to be used with the listener.
|
||||
// See examples folder for basic and mutual-tls use.
|
||||
Type string
|
||||
ID string
|
||||
Address string
|
||||
// TLSConfig is a tls.Config configuration to be used with the listener. See examples folder for basic and mutual-tls use.
|
||||
TLSConfig *tls.Config
|
||||
}
|
||||
|
||||
|
||||
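Editorial illustration (not part of the diff): with this change every listener is constructed from a single listeners.Config value instead of positional id/address/config arguments. A minimal sketch of the new call pattern, using an arbitrary example address:

```go
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/listeners"
)

func main() {
	server := mqtt.New(nil)

	// Listeners now take a single Config value carrying the ID, address and optional TLS settings.
	tcp := listeners.NewTCP(listeners.Config{ID: "t1", Address: ":1883"})
	if err := server.AddListener(tcp); err != nil {
		log.Fatal(err)
	}

	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}
}
```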
@@ -19,6 +19,9 @@ import (
|
||||
const testAddr = ":22222"
|
||||
|
||||
var (
|
||||
basicConfig = Config{ID: "t1", Address: testAddr}
|
||||
tlsConfig = Config{ID: "t1", Address: testAddr, TLSConfig: tlsConfigBasic}
|
||||
|
||||
logger = slog.New(slog.NewTextHandler(os.Stdout, nil))
|
||||
|
||||
testCertificate = []byte(`-----BEGIN CERTIFICATE-----
|
||||
@@ -65,6 +68,7 @@ func init() {
|
||||
MinVersion: tls.VersionTLS12,
|
||||
Certificates: []tls.Certificate{cert},
|
||||
}
|
||||
tlsConfig.TLSConfig = tlsConfigBasic
|
||||
}
|
||||
|
||||
func TestNew(t *testing.T) {
|
||||
|
||||
@@ -12,6 +12,8 @@ import (
|
||||
"log/slog"
|
||||
)
|
||||
|
||||
const TypeMock = "mock"
|
||||
|
||||
// MockEstablisher is a function signature which can be used in testing.
|
||||
func MockEstablisher(id string, c net.Conn) error {
|
||||
return nil
|
||||
|
||||
@@ -13,26 +13,24 @@ import (
|
||||
"log/slog"
|
||||
)
|
||||
|
||||
const TypeTCP = "tcp"
|
||||
|
||||
// TCP is a listener for establishing client connections on basic TCP protocol.
|
||||
type TCP struct { // [MQTT-4.2.0-1]
|
||||
sync.RWMutex
|
||||
id string // the internal id of the listener
|
||||
address string // the network address to bind to
|
||||
listen net.Listener // a net.Listener which will listen for new clients
|
||||
config *Config // configuration values for the listener
|
||||
config Config // configuration values for the listener
|
||||
log *slog.Logger // server logger
|
||||
end uint32 // ensure the close methods are only called once
|
||||
}
|
||||
|
||||
// NewTCP initialises and returns a new TCP listener, listening on an address.
|
||||
func NewTCP(id, address string, config *Config) *TCP {
|
||||
if config == nil {
|
||||
config = new(Config)
|
||||
}
|
||||
|
||||
// NewTCP initializes and returns a new TCP listener, listening on an address.
|
||||
func NewTCP(config Config) *TCP {
|
||||
return &TCP{
|
||||
id: id,
|
||||
address: address,
|
||||
id: config.ID,
|
||||
address: config.Address,
|
||||
config: config,
|
||||
}
|
||||
}
|
||||
@@ -44,6 +42,9 @@ func (l *TCP) ID() string {
|
||||
|
||||
// Address returns the address of the listener.
|
||||
func (l *TCP) Address() string {
|
||||
if l.listen != nil {
|
||||
return l.listen.Addr().String()
|
||||
}
|
||||
return l.address
|
||||
}
|
||||
|
||||
|
||||
@@ -14,45 +14,40 @@ import (
|
||||
)
|
||||
|
||||
func TestNewTCP(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, nil)
|
||||
l := NewTCP(basicConfig)
|
||||
require.Equal(t, "t1", l.id)
|
||||
require.Equal(t, testAddr, l.address)
|
||||
}
|
||||
|
||||
func TestTCPID(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, nil)
|
||||
l := NewTCP(basicConfig)
|
||||
require.Equal(t, "t1", l.ID())
|
||||
}
|
||||
|
||||
func TestTCPAddress(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, nil)
|
||||
l := NewTCP(basicConfig)
|
||||
require.Equal(t, testAddr, l.Address())
|
||||
}
|
||||
|
||||
func TestTCPProtocol(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, nil)
|
||||
l := NewTCP(basicConfig)
|
||||
require.Equal(t, "tcp", l.Protocol())
|
||||
}
|
||||
|
||||
func TestTCPProtocolTLS(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
})
|
||||
|
||||
l := NewTCP(tlsConfig)
|
||||
_ = l.Init(logger)
|
||||
defer l.listen.Close()
|
||||
require.Equal(t, "tcp", l.Protocol())
|
||||
}
|
||||
|
||||
func TestTCPInit(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, nil)
|
||||
l := NewTCP(basicConfig)
|
||||
err := l.Init(logger)
|
||||
l.Close(MockCloser)
|
||||
require.NoError(t, err)
|
||||
|
||||
l2 := NewTCP("t2", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
})
|
||||
l2 := NewTCP(tlsConfig)
|
||||
err = l2.Init(logger)
|
||||
l2.Close(MockCloser)
|
||||
require.NoError(t, err)
|
||||
@@ -60,7 +55,7 @@ func TestTCPInit(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestTCPServeAndClose(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, nil)
|
||||
l := NewTCP(basicConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -85,9 +80,7 @@ func TestTCPServeAndClose(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestTCPServeTLSAndClose(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
})
|
||||
l := NewTCP(tlsConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -109,7 +102,7 @@ func TestTCPServeTLSAndClose(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestTCPEstablishThenEnd(t *testing.T) {
|
||||
l := NewTCP("t1", testAddr, nil)
|
||||
l := NewTCP(basicConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
|
||||
@@ -13,21 +13,25 @@ import (
|
||||
"log/slog"
|
||||
)
|
||||
|
||||
const TypeUnix = "unix"
|
||||
|
||||
// UnixSock is a listener for establishing client connections on basic UnixSock protocol.
|
||||
type UnixSock struct {
|
||||
sync.RWMutex
|
||||
id string // the internal id of the listener.
|
||||
address string // the network address to bind to.
|
||||
config Config // configuration values for the listener
|
||||
listen net.Listener // a net.Listener which will listen for new clients.
|
||||
log *slog.Logger // server logger
|
||||
end uint32 // ensure the close methods are only called once.
|
||||
}
|
||||
|
||||
// NewUnixSock initialises and returns a new UnixSock listener, listening on an address.
|
||||
func NewUnixSock(id, address string) *UnixSock {
|
||||
// NewUnixSock initializes and returns a new UnixSock listener, listening on an address.
|
||||
func NewUnixSock(config Config) *UnixSock {
|
||||
return &UnixSock{
|
||||
id: id,
|
||||
address: address,
|
||||
id: config.ID,
|
||||
address: config.Address,
|
||||
config: config,
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -15,41 +15,47 @@ import (
|
||||
|
||||
const testUnixAddr = "mochi.sock"
|
||||
|
||||
var (
|
||||
unixConfig = Config{ID: "t1", Address: testUnixAddr}
|
||||
)
|
||||
|
||||
func TestNewUnixSock(t *testing.T) {
|
||||
l := NewUnixSock("t1", testUnixAddr)
|
||||
l := NewUnixSock(unixConfig)
|
||||
require.Equal(t, "t1", l.id)
|
||||
require.Equal(t, testUnixAddr, l.address)
|
||||
}
|
||||
|
||||
func TestUnixSockID(t *testing.T) {
|
||||
l := NewUnixSock("t1", testUnixAddr)
|
||||
l := NewUnixSock(unixConfig)
|
||||
require.Equal(t, "t1", l.ID())
|
||||
}
|
||||
|
||||
func TestUnixSockAddress(t *testing.T) {
|
||||
l := NewUnixSock("t1", testUnixAddr)
|
||||
l := NewUnixSock(unixConfig)
|
||||
require.Equal(t, testUnixAddr, l.Address())
|
||||
}
|
||||
|
||||
func TestUnixSockProtocol(t *testing.T) {
|
||||
l := NewUnixSock("t1", testUnixAddr)
|
||||
l := NewUnixSock(unixConfig)
|
||||
require.Equal(t, "unix", l.Protocol())
|
||||
}
|
||||
|
||||
func TestUnixSockInit(t *testing.T) {
|
||||
l := NewUnixSock("t1", testUnixAddr)
|
||||
l := NewUnixSock(unixConfig)
|
||||
err := l.Init(logger)
|
||||
l.Close(MockCloser)
|
||||
require.NoError(t, err)
|
||||
|
||||
l2 := NewUnixSock("t2", testUnixAddr)
|
||||
t2Config := unixConfig
|
||||
t2Config.ID = "t2"
|
||||
l2 := NewUnixSock(t2Config)
|
||||
err = l2.Init(logger)
|
||||
l2.Close(MockCloser)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestUnixSockServeAndClose(t *testing.T) {
|
||||
l := NewUnixSock("t1", testUnixAddr)
|
||||
l := NewUnixSock(unixConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -74,7 +80,7 @@ func TestUnixSockServeAndClose(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestUnixSockEstablishThenEnd(t *testing.T) {
|
||||
l := NewUnixSock("t1", testUnixAddr)
|
||||
l := NewUnixSock(unixConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
|
||||
@@ -19,6 +19,8 @@ import (
|
||||
"github.com/gorilla/websocket"
|
||||
)
|
||||
|
||||
const TypeWS = "ws"
|
||||
|
||||
var (
|
||||
// ErrInvalidMessage indicates that a message payload was not valid.
|
||||
ErrInvalidMessage = errors.New("message type not binary")
|
||||
@@ -29,7 +31,7 @@ type Websocket struct { // [MQTT-4.2.0-1]
|
||||
sync.RWMutex
|
||||
id string // the internal id of the listener
|
||||
address string // the network address to bind to
|
||||
config *Config // configuration values for the listener
|
||||
config Config // configuration values for the listener
|
||||
listen *http.Server // a http server for serving websocket connections
|
||||
log *slog.Logger // server logger
|
||||
establish EstablishFn // the server's establish connection handler
|
||||
@@ -37,15 +39,11 @@ type Websocket struct { // [MQTT-4.2.0-1]
|
||||
end uint32 // ensure the close methods are only called once
|
||||
}
|
||||
|
||||
// NewWebsocket initialises and returns a new Websocket listener, listening on an address.
|
||||
func NewWebsocket(id, address string, config *Config) *Websocket {
|
||||
if config == nil {
|
||||
config = new(Config)
|
||||
}
|
||||
|
||||
// NewWebsocket initializes and returns a new Websocket listener, listening on an address.
|
||||
func NewWebsocket(config Config) *Websocket {
|
||||
return &Websocket{
|
||||
id: id,
|
||||
address: address,
|
||||
id: config.ID,
|
||||
address: config.Address,
|
||||
config: config,
|
||||
upgrader: &websocket.Upgrader{
|
||||
Subprotocols: []string{"mqtt"},
|
||||
|
||||
@@ -17,35 +17,33 @@ import (
|
||||
)
|
||||
|
||||
func TestNewWebsocket(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, nil)
|
||||
l := NewWebsocket(basicConfig)
|
||||
require.Equal(t, "t1", l.id)
|
||||
require.Equal(t, testAddr, l.address)
|
||||
}
|
||||
|
||||
func TestWebsocketID(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, nil)
|
||||
l := NewWebsocket(basicConfig)
|
||||
require.Equal(t, "t1", l.ID())
|
||||
}
|
||||
|
||||
func TestWebsocketAddress(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, nil)
|
||||
l := NewWebsocket(basicConfig)
|
||||
require.Equal(t, testAddr, l.Address())
|
||||
}
|
||||
|
||||
func TestWebsocketProtocol(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, nil)
|
||||
l := NewWebsocket(basicConfig)
|
||||
require.Equal(t, "ws", l.Protocol())
|
||||
}
|
||||
|
||||
func TestWebsocketProtocolTLS(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
})
|
||||
l := NewWebsocket(tlsConfig)
|
||||
require.Equal(t, "wss", l.Protocol())
|
||||
}
|
||||
|
||||
func TestWebsocketInit(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, nil)
|
||||
l := NewWebsocket(basicConfig)
|
||||
require.Nil(t, l.listen)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
@@ -53,7 +51,7 @@ func TestWebsocketInit(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestWebsocketServeAndClose(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, nil)
|
||||
l := NewWebsocket(basicConfig)
|
||||
_ = l.Init(logger)
|
||||
|
||||
o := make(chan bool)
|
||||
@@ -74,9 +72,7 @@ func TestWebsocketServeAndClose(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestWebsocketServeTLSAndClose(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
})
|
||||
l := NewWebsocket(tlsConfig)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -96,9 +92,9 @@ func TestWebsocketServeTLSAndClose(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestWebsocketFailedToServe(t *testing.T) {
|
||||
l := NewWebsocket("t1", "wrong_addr", &Config{
|
||||
TLSConfig: tlsConfigBasic,
|
||||
})
|
||||
config := tlsConfig
|
||||
config.Address = "wrong_addr"
|
||||
l := NewWebsocket(config)
|
||||
err := l.Init(logger)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -117,7 +113,7 @@ func TestWebsocketFailedToServe(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestWebsocketUpgrade(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, nil)
|
||||
l := NewWebsocket(basicConfig)
|
||||
_ = l.Init(logger)
|
||||
|
||||
e := make(chan bool)
|
||||
@@ -136,7 +132,7 @@ func TestWebsocketUpgrade(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestWebsocketConnectionReads(t *testing.T) {
|
||||
l := NewWebsocket("t1", testAddr, nil)
|
||||
l := NewWebsocket(basicConfig)
|
||||
_ = l.Init(nil)
|
||||
|
||||
recv := make(chan []byte)
|
||||
|
||||
81
mempool/bufpool.go
Normal file
@@ -0,0 +1,81 @@
|
||||
package mempool
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"sync"
|
||||
)
|
||||
|
||||
var bufPool = NewBuffer(0)
|
||||
|
||||
// GetBuffer takes a Buffer from the default buffer pool
|
||||
func GetBuffer() *bytes.Buffer { return bufPool.Get() }
|
||||
|
||||
// PutBuffer returns a Buffer to the default buffer pool
|
||||
func PutBuffer(x *bytes.Buffer) { bufPool.Put(x) }
|
||||
|
||||
type BufferPool interface {
|
||||
Get() *bytes.Buffer
|
||||
Put(x *bytes.Buffer)
|
||||
}
|
||||
|
||||
// NewBuffer returns a buffer pool. The max argument specifies the maximum capacity of the Buffer the pool will
|
||||
// return. If the Buffer becomes larger than max, it will no longer be returned to the pool. If
|
||||
// max <= 0, no limit will be enforced.
|
||||
func NewBuffer(max int) BufferPool {
|
||||
if max > 0 {
|
||||
return newBufferWithCap(max)
|
||||
}
|
||||
|
||||
return newBuffer()
|
||||
}
|
||||
|
||||
// Buffer is a Buffer pool.
|
||||
type Buffer struct {
|
||||
pool *sync.Pool
|
||||
}
|
||||
|
||||
func newBuffer() *Buffer {
|
||||
return &Buffer{
|
||||
pool: &sync.Pool{
|
||||
New: func() any { return new(bytes.Buffer) },
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// Get a Buffer from the pool.
|
||||
func (b *Buffer) Get() *bytes.Buffer {
|
||||
return b.pool.Get().(*bytes.Buffer)
|
||||
}
|
||||
|
||||
// Put the Buffer back into the pool. It resets the Buffer for reuse.
|
||||
func (b *Buffer) Put(x *bytes.Buffer) {
|
||||
x.Reset()
|
||||
b.pool.Put(x)
|
||||
}
|
||||
|
||||
// BufferWithCap is a Buffer pool that discards any Buffer whose capacity has grown beyond a fixed maximum.
|
||||
type BufferWithCap struct {
|
||||
bp *Buffer
|
||||
max int
|
||||
}
|
||||
|
||||
func newBufferWithCap(max int) *BufferWithCap {
|
||||
return &BufferWithCap{
|
||||
bp: newBuffer(),
|
||||
max: max,
|
||||
}
|
||||
}
|
||||
|
||||
// Get a Buffer from the pool.
|
||||
func (b *BufferWithCap) Get() *bytes.Buffer {
|
||||
return b.bp.Get()
|
||||
}
|
||||
|
||||
// Put the Buffer back into the pool if the capacity doesn't exceed the limit. It resets the Buffer
|
||||
// for reuse.
|
||||
func (b *BufferWithCap) Put(x *bytes.Buffer) {
|
||||
if x.Cap() > b.max {
|
||||
return
|
||||
}
|
||||
b.bp.Put(x)
|
||||
}
|
||||
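A brief usage note (editorial, not part of the diff): the package-level helpers pair a Get with a Put, and Put resets the buffer, so callers must copy out anything they still need before returning it. A minimal sketch:

```go
package main

import (
	"fmt"

	"github.com/mochi-mqtt/server/v2/mempool"
)

func main() {
	buf := mempool.GetBuffer() // take a *bytes.Buffer from the shared pool
	buf.WriteString("encoded packet bytes")

	out := make([]byte, buf.Len())
	copy(out, buf.Bytes()) // copy before returning; PutBuffer resets the buffer

	mempool.PutBuffer(buf)
	fmt.Printf("%d bytes retained\n", len(out))
}
```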
96
mempool/bufpool_test.go
Normal file
@@ -0,0 +1,96 @@
|
||||
package mempool
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"reflect"
|
||||
"runtime/debug"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestNewBuffer(t *testing.T) {
|
||||
defer debug.SetGCPercent(debug.SetGCPercent(-1))
|
||||
bp := NewBuffer(1000)
|
||||
require.Equal(t, "*mempool.BufferWithCap", reflect.TypeOf(bp).String())
|
||||
|
||||
bp = NewBuffer(0)
|
||||
require.Equal(t, "*mempool.Buffer", reflect.TypeOf(bp).String())
|
||||
|
||||
bp = NewBuffer(-1)
|
||||
require.Equal(t, "*mempool.Buffer", reflect.TypeOf(bp).String())
|
||||
}
|
||||
|
||||
func TestBuffer(t *testing.T) {
|
||||
defer debug.SetGCPercent(debug.SetGCPercent(-1))
|
||||
Size := 101
|
||||
|
||||
bp := NewBuffer(0)
|
||||
buf := bp.Get()
|
||||
|
||||
for i := 0; i < Size; i++ {
|
||||
buf.WriteByte('a')
|
||||
}
|
||||
|
||||
bp.Put(buf)
|
||||
buf = bp.Get()
|
||||
require.Equal(t, 0, buf.Len())
|
||||
}
|
||||
|
||||
func TestBufferWithCap(t *testing.T) {
|
||||
defer debug.SetGCPercent(debug.SetGCPercent(-1))
|
||||
Size := 101
|
||||
bp := NewBuffer(100)
|
||||
buf := bp.Get()
|
||||
|
||||
for i := 0; i < Size; i++ {
|
||||
buf.WriteByte('a')
|
||||
}
|
||||
|
||||
bp.Put(buf)
|
||||
buf = bp.Get()
|
||||
require.Equal(t, 0, buf.Len())
|
||||
require.Equal(t, 0, buf.Cap())
|
||||
}
|
||||
|
||||
func BenchmarkBufferPool(b *testing.B) {
|
||||
bp := NewBuffer(0)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
b := bp.Get()
|
||||
b.WriteString("this is a test")
|
||||
bp.Put(b)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkBufferPoolWithCapLarger(b *testing.B) {
|
||||
bp := NewBuffer(64 * 1024)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
b := bp.Get()
|
||||
b.WriteString("this is a test")
|
||||
bp.Put(b)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkBufferPoolWithCapLesser(b *testing.B) {
|
||||
bp := NewBuffer(10)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
b := bp.Get()
|
||||
b.WriteString("this is a test")
|
||||
bp.Put(b)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkBufferWithoutPool(b *testing.B) {
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
b := new(bytes.Buffer)
|
||||
b.WriteString("this is a test")
|
||||
_ = b
|
||||
}
|
||||
}
|
||||
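One further illustration (editorial, with an arbitrary 4 KiB cap): the capacity-limited variant silently drops buffers that have grown beyond max rather than pooling them, which bounds the memory the pool can pin:

```go
package main

import "github.com/mochi-mqtt/server/v2/mempool"

func main() {
	pool := mempool.NewBuffer(4096) // buffers larger than 4 KiB are not returned to the pool

	b := pool.Get()
	b.Write(make([]byte, 8192)) // grows the buffer past the cap
	pool.Put(b)                 // discarded by BufferWithCap.Put, so the next Get allocates afresh
}
```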
@@ -12,6 +12,8 @@ import (
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/mochi-mqtt/server/v2/mempool"
|
||||
)
|
||||
|
||||
// All valid packet types and their packet identifiers.
|
||||
@@ -298,7 +300,8 @@ func (s *Subscription) decode(b byte) {
|
||||
|
||||
// ConnectEncode encodes a connect packet.
|
||||
func (pk *Packet) ConnectEncode(buf *bytes.Buffer) error {
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
nb.Write(encodeBytes(pk.Connect.ProtocolName))
|
||||
nb.WriteByte(pk.ProtocolVersion)
|
||||
|
||||
@@ -315,7 +318,8 @@ func (pk *Packet) ConnectEncode(buf *bytes.Buffer) error {
|
||||
nb.Write(encodeUint16(pk.Connect.Keepalive))
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
(&pk.Properties).Encode(pk.FixedHeader.Type, pk.Mods, pb, 0)
|
||||
nb.Write(pb.Bytes())
|
||||
}
|
||||
@@ -324,7 +328,8 @@ func (pk *Packet) ConnectEncode(buf *bytes.Buffer) error {
|
||||
|
||||
if pk.Connect.WillFlag {
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
(&pk.Connect).WillProperties.Encode(WillProperties, pk.Mods, pb, 0)
|
||||
nb.Write(pb.Bytes())
|
||||
}
|
||||
@@ -343,7 +348,7 @@ func (pk *Packet) ConnectEncode(buf *bytes.Buffer) error {
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -493,19 +498,22 @@ func (pk *Packet) ConnectValidate() Code {
|
||||
|
||||
// ConnackEncode encodes a Connack packet.
|
||||
func (pk *Packet) ConnackEncode(buf *bytes.Buffer) error {
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
nb.WriteByte(encodeBool(pk.SessionPresent))
|
||||
nb.WriteByte(pk.ReasonCode)
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len()+2) // +SessionPresent +ReasonCode
|
||||
nb.Write(pb.Bytes())
|
||||
}
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -536,19 +544,21 @@ func (pk *Packet) ConnackDecode(buf []byte) error {
|
||||
|
||||
// DisconnectEncode encodes a Disconnect packet.
|
||||
func (pk *Packet) DisconnectEncode(buf *bytes.Buffer) error {
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
nb.WriteByte(pk.ReasonCode)
|
||||
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len())
|
||||
nb.Write(pb.Bytes())
|
||||
}
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -598,7 +608,8 @@ func (pk *Packet) PingrespDecode(buf []byte) error {
|
||||
|
||||
// PublishEncode encodes a Publish packet.
|
||||
func (pk *Packet) PublishEncode(buf *bytes.Buffer) error {
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
|
||||
nb.Write(encodeString(pk.TopicName)) // [MQTT-3.3.2-1]
|
||||
|
||||
@@ -610,16 +621,16 @@ func (pk *Packet) PublishEncode(buf *bytes.Buffer) error {
|
||||
}
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len()+len(pk.Payload))
|
||||
nb.Write(pb.Bytes())
|
||||
}
|
||||
|
||||
nb.Write(pk.Payload)
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Remaining = nb.Len() + len(pk.Payload)
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
buf.Write(pk.Payload)
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -690,11 +701,13 @@ func (pk *Packet) PublishValidate(topicAliasMaximum uint16) Code {
|
||||
|
||||
// encodePubAckRelRecComp encodes a Puback, Pubrel, Pubrec, or Pubcomp packet.
|
||||
func (pk *Packet) encodePubAckRelRecComp(buf *bytes.Buffer) error {
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
nb.Write(encodeUint16(pk.PacketID))
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len())
|
||||
if pk.ReasonCode >= ErrUnspecifiedError.Code || pb.Len() > 1 {
|
||||
nb.WriteByte(pk.ReasonCode)
|
||||
@@ -707,7 +720,7 @@ func (pk *Packet) encodePubAckRelRecComp(buf *bytes.Buffer) error {
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -831,11 +844,13 @@ func (pk *Packet) ReasonCodeValid() bool {
|
||||
|
||||
// SubackEncode encodes a Suback packet.
|
||||
func (pk *Packet) SubackEncode(buf *bytes.Buffer) error {
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
nb.Write(encodeUint16(pk.PacketID))
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len()+len(pk.ReasonCodes))
|
||||
nb.Write(pb.Bytes())
|
||||
}
|
||||
@@ -844,7 +859,7 @@ func (pk *Packet) SubackEncode(buf *bytes.Buffer) error {
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -878,10 +893,12 @@ func (pk *Packet) SubscribeEncode(buf *bytes.Buffer) error {
|
||||
return ErrProtocolViolationNoPacketID
|
||||
}
|
||||
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
nb.Write(encodeUint16(pk.PacketID))
|
||||
|
||||
xb := bytes.NewBuffer([]byte{}) // capture and write filters after length checks
|
||||
xb := mempool.GetBuffer() // capture and write filters after length checks
|
||||
defer mempool.PutBuffer(xb)
|
||||
for _, opts := range pk.Filters {
|
||||
xb.Write(encodeString(opts.Filter)) // [MQTT-3.8.3-1]
|
||||
if pk.ProtocolVersion == 5 {
|
||||
@@ -892,7 +909,8 @@ func (pk *Packet) SubscribeEncode(buf *bytes.Buffer) error {
|
||||
}
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len()+xb.Len())
|
||||
nb.Write(pb.Bytes())
|
||||
}
|
||||
@@ -901,7 +919,7 @@ func (pk *Packet) SubscribeEncode(buf *bytes.Buffer) error {
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -983,20 +1001,21 @@ func (pk *Packet) SubscribeValidate() Code {
|
||||
|
||||
// UnsubackEncode encodes an Unsuback packet.
|
||||
func (pk *Packet) UnsubackEncode(buf *bytes.Buffer) error {
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
nb.Write(encodeUint16(pk.PacketID))
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len())
|
||||
nb.Write(pb.Bytes())
|
||||
nb.Write(pk.ReasonCodes)
|
||||
}
|
||||
|
||||
nb.Write(pk.ReasonCodes)
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -1031,16 +1050,19 @@ func (pk *Packet) UnsubscribeEncode(buf *bytes.Buffer) error {
|
||||
return ErrProtocolViolationNoPacketID
|
||||
}
|
||||
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
nb.Write(encodeUint16(pk.PacketID))
|
||||
|
||||
xb := bytes.NewBuffer([]byte{}) // capture filters and write after length checks
|
||||
xb := mempool.GetBuffer() // capture filters and write after length checks
|
||||
defer mempool.PutBuffer(xb)
|
||||
for _, sub := range pk.Filters {
|
||||
xb.Write(encodeString(sub.Filter)) // [MQTT-3.10.3-1]
|
||||
}
|
||||
|
||||
if pk.ProtocolVersion == 5 {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len()+xb.Len())
|
||||
nb.Write(pb.Bytes())
|
||||
}
|
||||
@@ -1049,7 +1071,7 @@ func (pk *Packet) UnsubscribeEncode(buf *bytes.Buffer) error {
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -1100,16 +1122,18 @@ func (pk *Packet) UnsubscribeValidate() Code {
|
||||
|
||||
// AuthEncode encodes an Auth packet.
|
||||
func (pk *Packet) AuthEncode(buf *bytes.Buffer) error {
|
||||
nb := bytes.NewBuffer([]byte{})
|
||||
nb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(nb)
|
||||
nb.WriteByte(pk.ReasonCode)
|
||||
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
pk.Properties.Encode(pk.FixedHeader.Type, pk.Mods, pb, nb.Len())
|
||||
nb.Write(pb.Bytes())
|
||||
|
||||
pk.FixedHeader.Remaining = nb.Len()
|
||||
pk.FixedHeader.Encode(buf)
|
||||
_, _ = nb.WriteTo(buf)
|
||||
buf.Write(nb.Bytes())
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@@ -8,6 +8,8 @@ import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/mochi-mqtt/server/v2/mempool"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -199,7 +201,8 @@ func (p *Properties) Encode(pkt byte, mods Mods, b *bytes.Buffer, n int) {
|
||||
return
|
||||
}
|
||||
|
||||
var buf bytes.Buffer
|
||||
buf := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(buf)
|
||||
if p.canEncode(pkt, PropPayloadFormat) && p.PayloadFormatFlag {
|
||||
buf.WriteByte(PropPayloadFormat)
|
||||
buf.WriteByte(p.PayloadFormat)
|
||||
@@ -230,7 +233,7 @@ func (p *Properties) Encode(pkt byte, mods Mods, b *bytes.Buffer, n int) {
|
||||
for _, v := range p.SubscriptionIdentifier {
|
||||
if v > 0 {
|
||||
buf.WriteByte(PropSubscriptionIdentifier)
|
||||
encodeLength(&buf, int64(v))
|
||||
encodeLength(buf, int64(v))
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -321,7 +324,8 @@ func (p *Properties) Encode(pkt byte, mods Mods, b *bytes.Buffer, n int) {
|
||||
}
|
||||
|
||||
if !mods.DisallowProblemInfo && p.canEncode(pkt, PropUser) {
|
||||
pb := bytes.NewBuffer([]byte{})
|
||||
pb := mempool.GetBuffer()
|
||||
defer mempool.PutBuffer(pb)
|
||||
for _, v := range p.User {
|
||||
pb.WriteByte(PropUser)
|
||||
pb.Write(encodeString(v.Key))
|
||||
@@ -355,7 +359,7 @@ func (p *Properties) Encode(pkt byte, mods Mods, b *bytes.Buffer, n int) {
|
||||
}
|
||||
|
||||
encodeLength(b, int64(buf.Len()))
|
||||
_, _ = buf.WriteTo(b) // [MQTT-3.1.3-10]
|
||||
b.Write(buf.Bytes()) // [MQTT-3.1.3-10]
|
||||
}
|
||||
|
||||
// Decode decodes property bytes into a properties struct.
|
||||
|
||||
@@ -201,6 +201,7 @@ const (
|
||||
TDisconnect
|
||||
TDisconnectTakeover
|
||||
TDisconnectMqtt5
|
||||
TDisconnectMqtt5DisconnectWithWillMessage
|
||||
TDisconnectSecondConnect
|
||||
TDisconnectReceiveMaximum
|
||||
TDisconnectDropProperties
|
||||
@@ -3781,6 +3782,31 @@ var TPacketData = map[byte]TPacketCases{
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Case: TDisconnectMqtt5DisconnectWithWillMessage,
|
||||
Desc: "mqtt5 disconnect with will message",
|
||||
Primary: true,
|
||||
RawBytes: append([]byte{
|
||||
Disconnect << 4, 38, // fixed header
|
||||
CodeDisconnectWillMessage.Code, // Reason Code
|
||||
36, // Properties Length
|
||||
17, 0, 0, 0, 120, // Session Expiry Interval (17)
|
||||
31, 0, 28, // Reason String (31)
|
||||
}, []byte(CodeDisconnectWillMessage.Reason)...),
|
||||
Packet: &Packet{
|
||||
ProtocolVersion: 5,
|
||||
FixedHeader: FixedHeader{
|
||||
Type: Disconnect,
|
||||
Remaining: 22,
|
||||
},
|
||||
ReasonCode: CodeDisconnectWillMessage.Code,
|
||||
Properties: Properties{
|
||||
ReasonString: CodeDisconnectWillMessage.Reason,
|
||||
SessionExpiryInterval: 120,
|
||||
SessionExpiryIntervalFlag: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Case: TDisconnectSecondConnect,
|
||||
Desc: "second connect packet mqtt5",
|
||||
|
||||
281
server.go
@@ -14,6 +14,7 @@ import (
|
||||
"runtime"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
||||
@@ -26,91 +27,108 @@ import (
|
||||
)
|
||||
|
||||
const (
|
||||
Version = "2.4.0" // the current server version.
|
||||
Version = "2.6.6" // the current server version.
|
||||
defaultSysTopicInterval int64 = 1 // the interval between $SYS topic publishes
|
||||
LocalListener = "local"
|
||||
InlineClientId = "inline"
|
||||
)
|
||||
|
||||
var (
|
||||
// DefaultServerCapabilities defines the default features and capabilities provided by the server.
|
||||
DefaultServerCapabilities = &Capabilities{
|
||||
MaximumSessionExpiryInterval: math.MaxUint32, // maximum number of seconds to keep disconnected sessions
|
||||
MaximumMessageExpiryInterval: 60 * 60 * 24, // maximum message expiry if message expiry is 0 or over
|
||||
ReceiveMaximum: 1024, // maximum number of concurrent qos messages per client
|
||||
MaximumQos: 2, // maximum qos value available to clients
|
||||
RetainAvailable: 1, // retain messages is available
|
||||
MaximumPacketSize: 0, // no maximum packet size
|
||||
TopicAliasMaximum: math.MaxUint16, // maximum topic alias value
|
||||
WildcardSubAvailable: 1, // wildcard subscriptions are available
|
||||
SubIDAvailable: 1, // subscription identifiers are available
|
||||
SharedSubAvailable: 1, // shared subscriptions are available
|
||||
MinimumProtocolVersion: 3, // minimum supported mqtt version (3.0.0)
|
||||
MaximumClientWritesPending: 1024 * 8, // maximum number of pending message writes for a client
|
||||
}
|
||||
// Deprecated: Use NewDefaultServerCapabilities to avoid data race issues.
|
||||
DefaultServerCapabilities = NewDefaultServerCapabilities()
|
||||
|
||||
ErrListenerIDExists = errors.New("listener id already exists") // a listener with the same id already exists
|
||||
ErrConnectionClosed = errors.New("connection not open") // connection is closed
|
||||
ErrInlineClientNotEnabled = errors.New("please set Options.InlineClient=true to use this feature") // inline client is not enabled by default
|
||||
ErrOptionsUnreadable = errors.New("unable to read options from bytes")
|
||||
)
|
||||
|
||||
// Capabilities indicates the capabilities and features provided by the server.
|
||||
type Capabilities struct {
|
||||
MaximumMessageExpiryInterval int64
|
||||
MaximumClientWritesPending int32
|
||||
MaximumSessionExpiryInterval uint32
|
||||
MaximumPacketSize uint32
|
||||
maximumPacketID uint32 // unexported, used for testing only
|
||||
ReceiveMaximum uint16
|
||||
TopicAliasMaximum uint16
|
||||
SharedSubAvailable byte
|
||||
MinimumProtocolVersion byte
|
||||
Compatibilities Compatibilities
|
||||
MaximumQos byte
|
||||
RetainAvailable byte
|
||||
WildcardSubAvailable byte
|
||||
SubIDAvailable byte
|
||||
MaximumClients int64 `yaml:"maximum_clients" json:"maximum_clients"` // maximum number of connected clients
|
||||
MaximumMessageExpiryInterval int64 `yaml:"maximum_message_expiry_interval" json:"maximum_message_expiry_interval"` // maximum message expiry if message expiry is 0 or over
|
||||
MaximumClientWritesPending int32 `yaml:"maximum_client_writes_pending" json:"maximum_client_writes_pending"` // maximum number of pending message writes for a client
|
||||
MaximumSessionExpiryInterval uint32 `yaml:"maximum_session_expiry_interval" json:"maximum_session_expiry_interval"` // maximum number of seconds to keep disconnected sessions
|
||||
MaximumPacketSize uint32 `yaml:"maximum_packet_size" json:"maximum_packet_size"` // maximum packet size, no limit if 0
|
||||
maximumPacketID uint32 // unexported, used for testing only
|
||||
ReceiveMaximum uint16 `yaml:"receive_maximum" json:"receive_maximum"` // maximum number of concurrent qos messages per client
|
||||
MaximumInflight uint16 `yaml:"maximum_inflight" json:"maximum_inflight"` // maximum number of qos > 0 messages that can be stored, 0(=8192)-65535
|
||||
TopicAliasMaximum uint16 `yaml:"topic_alias_maximum" json:"topic_alias_maximum"` // maximum topic alias value
|
||||
SharedSubAvailable byte `yaml:"shared_sub_available" json:"shared_sub_available"` // support of shared subscriptions
|
||||
MinimumProtocolVersion byte `yaml:"minimum_protocol_version" json:"minimum_protocol_version"` // minimum supported mqtt version
|
||||
Compatibilities Compatibilities `yaml:"compatibilities" json:"compatibilities"` // version compatibilities the server provides
|
||||
MaximumQos byte `yaml:"maximum_qos" json:"maximum_qos"` // maximum qos value available to clients
|
||||
RetainAvailable byte `yaml:"retain_available" json:"retain_available"` // support of retain messages
|
||||
WildcardSubAvailable byte `yaml:"wildcard_sub_available" json:"wildcard_sub_available"` // support of wildcard subscriptions
|
||||
SubIDAvailable byte `yaml:"sub_id_available" json:"sub_id_available"` // support of subscription identifiers
|
||||
}
|
||||
|
||||
// NewDefaultServerCapabilities defines the default features and capabilities provided by the server.
|
||||
func NewDefaultServerCapabilities() *Capabilities {
|
||||
return &Capabilities{
|
||||
MaximumClients: math.MaxInt64, // maximum number of connected clients
|
||||
MaximumMessageExpiryInterval: 60 * 60 * 24, // maximum message expiry if message expiry is 0 or over
|
||||
MaximumClientWritesPending: 1024 * 8, // maximum number of pending message writes for a client
|
||||
MaximumSessionExpiryInterval: math.MaxUint32, // maximum number of seconds to keep disconnected sessions
|
||||
MaximumPacketSize: 0, // no maximum packet size
|
||||
maximumPacketID: math.MaxUint16,
|
||||
ReceiveMaximum: 1024, // maximum number of concurrent qos messages per client
|
||||
MaximumInflight: 1024 * 8, // maximum number of qos > 0 messages that can be stored
|
||||
TopicAliasMaximum: math.MaxUint16, // maximum topic alias value
|
||||
SharedSubAvailable: 1, // shared subscriptions are available
|
||||
MinimumProtocolVersion: 3, // minimum supported mqtt version (3.0.0)
|
||||
MaximumQos: 2, // maximum qos value available to clients
|
||||
RetainAvailable: 1, // retain messages is available
|
||||
WildcardSubAvailable: 1, // wildcard subscriptions are available
|
||||
SubIDAvailable: 1, // subscription identifiers are available
|
||||
}
|
||||
}
|
||||
|
||||
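Editorial sketch (not part of the diff): because the shared DefaultServerCapabilities value is now deprecated in favour of per-server copies, individual limits are best adjusted on a fresh Capabilities value before constructing the server. The numbers below are arbitrary examples:

```go
package main

import (
	mqtt "github.com/mochi-mqtt/server/v2"
)

func main() {
	caps := mqtt.NewDefaultServerCapabilities()
	caps.MaximumClients = 10000 // further connections receive server busy/unavailable
	caps.MaximumInflight = 1024 // cap stored qos > 0 messages per client

	server := mqtt.New(&mqtt.Options{Capabilities: caps})
	_ = server
}
```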
// Compatibilities provides flags for using compatibility modes.
|
||||
type Compatibilities struct {
|
||||
ObscureNotAuthorized bool // return unspecified errors instead of not authorized
|
||||
PassiveClientDisconnect bool // don't disconnect the client forcefully after sending disconnect packet (paho - spec violation)
|
||||
AlwaysReturnResponseInfo bool // always return response info (useful for testing)
|
||||
RestoreSysInfoOnRestart bool // restore system info from store as if server never stopped
|
||||
NoInheritedPropertiesOnAck bool // don't allow inherited user properties on ack (paho - spec violation)
|
||||
ObscureNotAuthorized bool `yaml:"obscure_not_authorized" json:"obscure_not_authorized"` // return unspecified errors instead of not authorized
|
||||
PassiveClientDisconnect bool `yaml:"passive_client_disconnect" json:"passive_client_disconnect"` // don't disconnect the client forcefully after sending disconnect packet (paho - spec violation)
|
||||
AlwaysReturnResponseInfo bool `yaml:"always_return_response_info" json:"always_return_response_info"` // always return response info (useful for testing)
|
||||
RestoreSysInfoOnRestart bool `yaml:"restore_sys_info_on_restart" json:"restore_sys_info_on_restart"` // restore system info from store as if server never stopped
|
||||
NoInheritedPropertiesOnAck bool `yaml:"no_inherited_properties_on_ack" json:"no_inherited_properties_on_ack"` // don't allow inherited user properties on ack (paho - spec violation)
|
||||
}
|
||||
|
||||
// Options contains configurable options for the server.
|
||||
type Options struct {
|
||||
// Listeners specifies any listeners which should be dynamically added on serve. Used when setting listeners by config.
|
||||
Listeners []listeners.Config `yaml:"listeners" json:"listeners"`
|
||||
|
||||
// Hooks specifies any hooks which should be dynamically added on serve. Used when setting hooks by config.
|
||||
Hooks []HookLoadConfig `yaml:"hooks" json:"hooks"`
|
||||
|
||||
// Capabilities defines the server features and behaviour. If you only wish to modify
|
||||
// several of these values, set them explicitly - e.g.
|
||||
// server.Options.Capabilities.MaximumClientWritesPending = 16 * 1024
|
||||
Capabilities *Capabilities
|
||||
Capabilities *Capabilities `yaml:"capabilities" json:"capabilities"`
|
||||
|
||||
// ClientNetWriteBufferSize specifies the size of the client *bufio.Writer write buffer.
|
||||
ClientNetWriteBufferSize int
|
||||
ClientNetWriteBufferSize int `yaml:"client_net_write_buffer_size" json:"client_net_write_buffer_size"`
|
||||
|
||||
// ClientNetReadBufferSize specifies the size of the client *bufio.Reader read buffer.
|
||||
ClientNetReadBufferSize int
|
||||
ClientNetReadBufferSize int `yaml:"client_net_read_buffer_size" json:"client_net_read_buffer_size"`
|
||||
|
||||
// Logger specifies a custom configured implementation of zerolog to override
|
||||
// Logger specifies a custom configured implementation of log/slog to override
|
||||
// the servers default logger configuration. If you wish to change the log level,
|
||||
// of the default logger, you can do so by setting
|
||||
// server := mqtt.New(nil)
|
||||
// of the default logger, you can do so by setting:
|
||||
// server := mqtt.New(nil)
|
||||
// level := new(slog.LevelVar)
|
||||
// server.Slog = slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
|
||||
// Level: level,
|
||||
// }))
|
||||
// level.Set(slog.LevelDebug)
|
||||
Logger *slog.Logger
|
||||
Logger *slog.Logger `yaml:"-" json:"-"`
|
||||
|
||||
// SysTopicResendInterval specifies the interval between $SYS topic updates in seconds.
|
||||
SysTopicResendInterval int64
|
||||
SysTopicResendInterval int64 `yaml:"sys_topic_resend_interval" json:"sys_topic_resend_interval"`
|
||||
|
||||
// Enable Inline client to allow direct subscribing and publishing from the parent codebase,
|
||||
// with negligible performance difference (disabled by default to prevent confusion in statistics).
|
||||
InlineClient bool
|
||||
InlineClient bool `yaml:"inline_client" json:"inline_client"`
|
||||
}
|
||||
|
||||
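A hedged sketch of the config-driven path the new yaml/json tags enable: listeners and hooks listed on Options are attached by Serve via AddListenersFromConfig and AddHooksFromConfig. The addresses are arbitrary, and the hook used here is the repository's built-in allow-all auth hook, assumed from the wider codebase rather than shown in this diff:

```go
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/hooks/auth"
	"github.com/mochi-mqtt/server/v2/listeners"
)

func main() {
	opts := &mqtt.Options{
		InlineClient: true,
		Listeners: []listeners.Config{
			{Type: listeners.TypeTCP, ID: "tcp", Address: ":1883"},
			{Type: listeners.TypeWS, ID: "ws", Address: ":1882"},
		},
		Hooks: []mqtt.HookLoadConfig{
			{Hook: new(auth.AllowHook)}, // assumed built-in hook permitting all connections
		},
	}

	server := mqtt.New(opts)
	if err := server.Serve(); err != nil { // Serve attaches the configured listeners and hooks first
		log.Fatal(err)
	}
}
```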
// Server is an MQTT broker server. It should be created with server.New()
|
||||
@@ -190,11 +208,15 @@ func New(opts *Options) *Server {
|
||||
// ensureDefaults ensures that the server starts with sane default values, if none are provided.
|
||||
func (o *Options) ensureDefaults() {
|
||||
if o.Capabilities == nil {
|
||||
o.Capabilities = DefaultServerCapabilities
|
||||
o.Capabilities = NewDefaultServerCapabilities()
|
||||
}
|
||||
|
||||
o.Capabilities.maximumPacketID = math.MaxUint16 // spec maximum is 65535
|
||||
|
||||
if o.Capabilities.MaximumInflight == 0 {
|
||||
o.Capabilities.MaximumInflight = 1024 * 8
|
||||
}
|
||||
|
||||
if o.SysTopicResendInterval == 0 {
|
||||
o.SysTopicResendInterval = defaultSysTopicInterval
|
||||
}
|
||||
@@ -233,8 +255,6 @@ func (s *Server) NewClient(c net.Conn, listener string, id string, inline bool)
|
||||
// By default, we don't want to restrict developer publishes,
|
||||
// but if you do, reset this after creating inline client.
|
||||
cl.State.Inflight.ResetReceiveQuota(math.MaxInt32)
|
||||
} else {
|
||||
go cl.WriteLoop() // can only write to real clients
|
||||
}
|
||||
|
||||
return cl
|
||||
@@ -252,6 +272,17 @@ func (s *Server) AddHook(hook Hook, config any) error {
|
||||
return s.hooks.Add(hook, config)
|
||||
}
|
||||
|
||||
// AddHooksFromConfig adds hooks to the server which were specified in the hooks config (usually from a config file).
|
||||
// New built-in hooks should be added to this list.
|
||||
func (s *Server) AddHooksFromConfig(hooks []HookLoadConfig) error {
|
||||
for _, h := range hooks {
|
||||
if err := s.AddHook(h.Hook, h.Config); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// AddListener adds a new network listener to the server, for receiving incoming client connections.
|
||||
func (s *Server) AddListener(l listeners.Listener) error {
|
||||
if _, ok := s.Listeners.Get(l.ID()); ok {
|
||||
@@ -270,12 +301,55 @@ func (s *Server) AddListener(l listeners.Listener) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// AddListenersFromConfig adds listeners to the server which were specified in the listeners config (usually from a config file).
|
||||
// New built-in listeners should be added to this list.
|
||||
func (s *Server) AddListenersFromConfig(configs []listeners.Config) error {
|
||||
for _, conf := range configs {
|
||||
var l listeners.Listener
|
||||
switch strings.ToLower(conf.Type) {
|
||||
case listeners.TypeTCP:
|
||||
l = listeners.NewTCP(conf)
|
||||
case listeners.TypeWS:
|
||||
l = listeners.NewWebsocket(conf)
|
||||
case listeners.TypeUnix:
|
||||
l = listeners.NewUnixSock(conf)
|
||||
case listeners.TypeHealthCheck:
|
||||
l = listeners.NewHTTPHealthCheck(conf)
|
||||
case listeners.TypeSysInfo:
|
||||
l = listeners.NewHTTPStats(conf, s.Info)
|
||||
case listeners.TypeMock:
|
||||
l = listeners.NewMockListener(conf.ID, conf.Address)
|
||||
default:
|
||||
s.Log.Error("listener type unavailable by config", "listener", conf.Type)
|
||||
continue
|
||||
}
|
||||
if err := s.AddListener(l); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Serve starts the event loops responsible for establishing client connections
|
||||
// on all attached listeners, publishing the system topics, and starting all hooks.
|
||||
func (s *Server) Serve() error {
|
||||
s.Log.Info("mochi mqtt starting", "version", Version)
|
||||
defer s.Log.Info("mochi mqtt server started")
|
||||
|
||||
if len(s.Options.Listeners) > 0 {
|
||||
err := s.AddListenersFromConfig(s.Options.Listeners)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(s.Options.Hooks) > 0 {
|
||||
err := s.AddHooksFromConfig(s.Options.Hooks)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if s.hooks.Provides(
|
||||
StoredClients,
|
||||
StoredInflightMessages,
|
||||
@@ -332,6 +406,8 @@ func (s *Server) EstablishConnection(listener string, c net.Conn) error {
|
||||
func (s *Server) attachClient(cl *Client, listener string) error {
|
||||
defer s.Listeners.ClientsWg.Done()
|
||||
s.Listeners.ClientsWg.Add(1)
|
||||
|
||||
go cl.WriteLoop()
|
||||
defer cl.Stop(nil)
|
||||
|
||||
pk, err := s.readConnectionPacket(cl)
|
||||
@@ -340,6 +416,16 @@ func (s *Server) attachClient(cl *Client, listener string) error {
|
||||
}
|
||||
|
||||
cl.ParseConnect(listener, pk)
|
||||
if atomic.LoadInt64(&s.Info.ClientsConnected) >= s.Options.Capabilities.MaximumClients {
|
||||
if cl.Properties.ProtocolVersion < 5 {
|
||||
s.SendConnack(cl, packets.ErrServerUnavailable, false, nil)
|
||||
} else {
|
||||
s.SendConnack(cl, packets.ErrServerBusy, false, nil)
|
||||
}
|
||||
|
||||
return packets.ErrServerBusy
|
||||
}
|
||||
|
||||
code := s.validateConnect(cl, pk) // [MQTT-3.1.4-1] [MQTT-3.1.4-2]
|
||||
if code != packets.CodeSuccess {
|
||||
if err := s.SendConnack(cl, code, false, nil); err != nil {
|
||||
@@ -400,7 +486,7 @@ func (s *Server) attachClient(cl *Client, listener string) error {
|
||||
s.hooks.OnDisconnect(cl, err, expire)
|
||||
|
||||
if expire && atomic.LoadUint32(&cl.State.isTakenOver) == 0 {
|
||||
cl.ClearInflights(math.MaxInt64, 0)
|
||||
cl.ClearInflights()
|
||||
s.UnsubscribeClient(cl)
|
||||
s.Clients.Delete(cl.ID) // [MQTT-4.1.0-2] ![MQTT-3.1.2-23]
|
||||
}
|
||||
@@ -474,11 +560,11 @@ func (s *Server) validateConnect(cl *Client, pk packets.Packet) packets.Code {
|
||||
// connection ID. If clean is true, the state of any previously existing client
|
||||
// session is abandoned.
|
||||
func (s *Server) inheritClientSession(pk packets.Packet, cl *Client) bool {
|
||||
if existing, ok := s.Clients.Get(pk.Connect.ClientIdentifier); ok {
|
||||
if existing, ok := s.Clients.Get(cl.ID); ok {
|
||||
_ = s.DisconnectClient(existing, packets.ErrSessionTakenOver) // [MQTT-3.1.4-3]
|
||||
if pk.Connect.Clean || (existing.Properties.Clean && existing.Properties.ProtocolVersion < 5) { // [MQTT-3.1.2-4] [MQTT-3.1.4-4]
|
||||
s.UnsubscribeClient(existing)
|
||||
existing.ClearInflights(math.MaxInt64, 0)
|
||||
existing.ClearInflights()
|
||||
atomic.StoreUint32(&existing.State.isTakenOver, 1) // only set isTakenOver after unsubscribe has occurred
|
||||
return false // [MQTT-3.2.2-3]
|
||||
}
|
||||
@@ -503,7 +589,7 @@ func (s *Server) inheritClientSession(pk packets.Packet, cl *Client) bool {
|
||||
// Clean the state of the existing client to prevent sequential take-overs
|
||||
// from increasing memory usage by inflights + subs * client-id.
|
||||
s.UnsubscribeClient(existing)
|
||||
existing.ClearInflights(math.MaxInt64, 0)
|
||||
existing.ClearInflights()
|
||||
|
||||
s.Log.Debug("session taken over", "client", cl.ID, "old_remote", existing.Net.Remote, "new_remote", cl.Net.Remote)
|
||||
|
||||
@@ -969,9 +1055,17 @@ func (s *Server) publishToClient(cl *Client, sub packets.Subscription, pk packet
|
||||
}
|
||||
|
||||
if out.FixedHeader.Qos > 0 {
|
||||
if cl.State.Inflight.Len() >= int(s.Options.Capabilities.MaximumInflight) {
|
||||
// add hook?
|
||||
atomic.AddInt64(&s.Info.InflightDropped, 1)
|
||||
s.Log.Warn("client store quota reached", "client", cl.ID, "listener", cl.Net.Listener)
|
||||
return out, packets.ErrQuotaExceeded
|
||||
}
|
||||
|
||||
i, err := cl.NextPacketID() // [MQTT-4.3.2-1] [MQTT-4.3.3-1]
|
||||
if err != nil {
|
||||
s.hooks.OnPacketIDExhausted(cl, pk)
|
||||
atomic.AddInt64(&s.Info.InflightDropped, 1)
|
||||
s.Log.Warn("packet ids exhausted", "error", err, "client", cl.ID, "listener", cl.Net.Listener)
|
||||
return out, packets.ErrQuotaExceeded
|
||||
}
|
||||
@@ -1002,8 +1096,10 @@ func (s *Server) publishToClient(cl *Client, sub packets.Subscription, pk packet
|
||||
default:
|
||||
atomic.AddInt64(&s.Info.MessagesDropped, 1)
|
||||
cl.ops.hooks.OnPublishDropped(cl, pk)
|
||||
cl.State.Inflight.Delete(out.PacketID) // packet was dropped due to irregular circumstances, so rollback inflight.
|
||||
cl.State.Inflight.IncreaseSendQuota()
|
||||
if out.FixedHeader.Qos > 0 {
|
||||
cl.State.Inflight.Delete(out.PacketID) // packet was dropped due to irregular circumstances, so rollback inflight.
|
||||
cl.State.Inflight.IncreaseSendQuota()
|
||||
}
|
||||
return out, packets.ErrPendingClientWritesExceeded
|
||||
}
|
||||
|
||||
@@ -1297,6 +1393,10 @@ func (s *Server) processDisconnect(cl *Client, pk packets.Packet) error {
|
||||
cl.Properties.Props.SessionExpiryIntervalFlag = true
|
||||
}
|
||||
|
||||
if pk.ReasonCode == packets.CodeDisconnectWillMessage.Code { // [MQTT-3.1.2.5] Non-normative comment
|
||||
return packets.CodeDisconnectWillMessage
|
||||
}
|
||||
|
||||
s.loop.willDelayed.Delete(cl.ID) // [MQTT-3.1.3-9] [MQTT-3.1.2-8]
|
||||
cl.Stop(packets.CodeDisconnect) // [MQTT-3.14.4-2]
|
||||
|
||||
@@ -1351,27 +1451,28 @@ func (s *Server) publishSysTopics() {
atomic.StoreInt64(&s.Info.ClientsTotal, int64(s.Clients.Len()))
atomic.StoreInt64(&s.Info.ClientsDisconnected, atomic.LoadInt64(&s.Info.ClientsTotal)-atomic.LoadInt64(&s.Info.ClientsConnected))

info := s.Info.Clone()
topics := map[string]string{
SysPrefix + "/broker/version": s.Info.Version,
SysPrefix + "/broker/time": AtomicItoa(&s.Info.Time),
SysPrefix + "/broker/uptime": AtomicItoa(&s.Info.Uptime),
SysPrefix + "/broker/started": AtomicItoa(&s.Info.Started),
SysPrefix + "/broker/load/bytes/received": AtomicItoa(&s.Info.BytesReceived),
SysPrefix + "/broker/load/bytes/sent": AtomicItoa(&s.Info.BytesSent),
SysPrefix + "/broker/clients/connected": AtomicItoa(&s.Info.ClientsConnected),
SysPrefix + "/broker/clients/disconnected": AtomicItoa(&s.Info.ClientsDisconnected),
SysPrefix + "/broker/clients/maximum": AtomicItoa(&s.Info.ClientsMaximum),
SysPrefix + "/broker/clients/total": AtomicItoa(&s.Info.ClientsTotal),
SysPrefix + "/broker/packets/received": AtomicItoa(&s.Info.PacketsReceived),
SysPrefix + "/broker/packets/sent": AtomicItoa(&s.Info.PacketsSent),
SysPrefix + "/broker/messages/received": AtomicItoa(&s.Info.MessagesReceived),
SysPrefix + "/broker/messages/sent": AtomicItoa(&s.Info.MessagesSent),
SysPrefix + "/broker/messages/dropped": AtomicItoa(&s.Info.MessagesDropped),
SysPrefix + "/broker/messages/inflight": AtomicItoa(&s.Info.Inflight),
SysPrefix + "/broker/retained": AtomicItoa(&s.Info.Retained),
SysPrefix + "/broker/subscriptions": AtomicItoa(&s.Info.Subscriptions),
SysPrefix + "/broker/system/memory": AtomicItoa(&s.Info.MemoryAlloc),
SysPrefix + "/broker/system/threads": AtomicItoa(&s.Info.Threads),
SysPrefix + "/broker/time": Int64toa(info.Time),
SysPrefix + "/broker/uptime": Int64toa(info.Uptime),
SysPrefix + "/broker/started": Int64toa(info.Started),
SysPrefix + "/broker/load/bytes/received": Int64toa(info.BytesReceived),
SysPrefix + "/broker/load/bytes/sent": Int64toa(info.BytesSent),
SysPrefix + "/broker/clients/connected": Int64toa(info.ClientsConnected),
SysPrefix + "/broker/clients/disconnected": Int64toa(info.ClientsDisconnected),
SysPrefix + "/broker/clients/maximum": Int64toa(info.ClientsMaximum),
SysPrefix + "/broker/clients/total": Int64toa(info.ClientsTotal),
SysPrefix + "/broker/packets/received": Int64toa(info.PacketsReceived),
SysPrefix + "/broker/packets/sent": Int64toa(info.PacketsSent),
SysPrefix + "/broker/messages/received": Int64toa(info.MessagesReceived),
SysPrefix + "/broker/messages/sent": Int64toa(info.MessagesSent),
SysPrefix + "/broker/messages/dropped": Int64toa(info.MessagesDropped),
SysPrefix + "/broker/messages/inflight": Int64toa(info.Inflight),
SysPrefix + "/broker/retained": Int64toa(info.Retained),
SysPrefix + "/broker/subscriptions": Int64toa(info.Subscriptions),
SysPrefix + "/broker/system/memory": Int64toa(info.MemoryAlloc),
SysPrefix + "/broker/system/threads": Int64toa(info.Threads),
}

for topic, payload := range topics {
@@ -1381,7 +1482,7 @@ func (s *Server) publishSysTopics() {
s.publishToSubscribers(pk)
}

s.hooks.OnSysInfoTick(s.Info)
s.hooks.OnSysInfoTick(info)
}

// Close attempts to gracefully shut down the server, all listeners, clients, and stores.
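
The `$SYS` hunks above switch the payload source from live atomics (`AtomicItoa(&s.Info...)`) to a single `info := s.Info.Clone()` snapshot, so every `$SYS/broker/...` topic and the `OnSysInfoTick` hook now observe the same consistent values, published as plain decimal strings. A minimal sketch of reading them with the inline client follows; the module path, the `InlineClient` option wiring, and the exact `Subscribe` signature are assumptions here, not taken from this diff:

```go
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/packets"
)

func main() {
	// Assumed: InlineClient enables server.Subscribe for in-process subscriptions.
	server := mqtt.New(&mqtt.Options{InlineClient: true})
	defer server.Close()

	// Assumed inline-subscribe signature: (filter, subscriptionId, callback).
	err := server.Subscribe("$SYS/broker/#", 1, func(cl *mqtt.Client, sub packets.Subscription, pk packets.Packet) {
		// Each $SYS payload is a decimal string produced by Int64toa, e.g. "1024".
		log.Printf("%s = %s", pk.TopicName, string(pk.Payload))
	})
	if err != nil {
		log.Fatal(err)
	}

	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}
	select {} // keep the broker running; $SYS values are republished on a timer
}
```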
@@ -1553,14 +1654,25 @@ func (s *Server) loadClients(v []storage.Client) {
MaximumPacketSize: c.Properties.MaximumPacketSize,
}
cl.Properties.Will = Will(c.Will)
s.Clients.Add(cl)

// cancel the context, update cl.State such as disconnected time and stopCause.
cl.Stop(packets.ErrServerShuttingDown)

expire := (cl.Properties.ProtocolVersion == 5 && cl.Properties.Props.SessionExpiryInterval == 0) || (cl.Properties.ProtocolVersion < 5 && cl.Properties.Clean)
s.hooks.OnDisconnect(cl, packets.ErrServerShuttingDown, expire)
if expire {
cl.ClearInflights()
s.UnsubscribeClient(cl)
} else {
s.Clients.Add(cl)
}
}
}

// loadInflight restores inflight messages from the datastore.
func (s *Server) loadInflight(v []storage.Message) {
for _, msg := range v {
if client, ok := s.Clients.Get(msg.Origin); ok {
if client, ok := s.Clients.Get(msg.Client); ok {
client.State.Inflight.Set(msg.ToPacket())
}
}
@@ -1577,7 +1689,7 @@ func (s *Server) loadRetained(v []storage.Message) {
// than their given expiry intervals.
func (s *Server) clearExpiredClients(dt int64) {
for id, client := range s.Clients.GetAll() {
disconnected := atomic.LoadInt64(&client.State.disconnected)
disconnected := client.StopTime()
if disconnected == 0 {
continue
}
@@ -1597,7 +1709,14 @@ func (s *Server) clearExpiredClients(dt int64) {
// clearExpiredRetainedMessage deletes retained messages from topics if they have expired.
func (s *Server) clearExpiredRetainedMessages(now int64) {
for filter, pk := range s.Topics.Retained.GetAll() {
if (pk.Expiry > 0 && pk.Expiry < now) || pk.Created+s.Options.Capabilities.MaximumMessageExpiryInterval < now {
expired := pk.ProtocolVersion == 5 && pk.Expiry > 0 && pk.Expiry < now // [MQTT-3.3.2-5]

// If the maximum message expiry interval is set (greater than 0), and the message
// retention period exceeds the maximum expiry, the message will be forcibly removed.
enforced := s.Options.Capabilities.MaximumMessageExpiryInterval > 0 &&
now-pk.Created > s.Options.Capabilities.MaximumMessageExpiryInterval

if expired || enforced {
s.Topics.Retained.Delete(filter)
s.hooks.OnRetainedExpired(filter)
}
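
The replacement check above splits retained-message expiry into two independent conditions: a per-message `Expiry` that only applies to MQTT v5 publishes ([MQTT-3.3.2-5]), and a server-enforced cap driven by `Capabilities.MaximumMessageExpiryInterval` (only when it is greater than 0). A self-contained restatement of that decision, with the helper name purely illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// retainedShouldExpire is a hypothetical standalone restatement of the check
// added to clearExpiredRetainedMessages above.
func retainedShouldExpire(protocolVersion byte, created, expiry, maxExpiryInterval, now int64) bool {
	// Per-message expiry only applies to MQTT v5 packets [MQTT-3.3.2-5].
	expired := protocolVersion == 5 && expiry > 0 && expiry < now

	// A server-wide maximum (when > 0) force-expires anything retained too long,
	// regardless of protocol version.
	enforced := maxExpiryInterval > 0 && now-created > maxExpiryInterval

	return expired || enforced
}

func main() {
	now := time.Now().Unix()
	fmt.Println(retainedShouldExpire(5, now, now-1, 4, now)) // true: v5 expiry already in the past
	fmt.Println(retainedShouldExpire(4, now, now-1, 4, now)) // false: per-message expiry ignored for v3
	fmt.Println(retainedShouldExpire(4, now-5, 0, 4, now))   // true: exceeds the server maximum
}
```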
@@ -1607,7 +1726,7 @@ func (s *Server) clearExpiredRetainedMessages(now int64) {
// clearExpiredInflights deletes any inflight messages which have expired.
func (s *Server) clearExpiredInflights(now int64) {
for _, client := range s.Clients.GetAll() {
if deleted := client.ClearInflights(now, s.Options.Capabilities.MaximumMessageExpiryInterval); len(deleted) > 0 {
if deleted := client.ClearExpiredInflights(now, s.Options.Capabilities.MaximumMessageExpiryInterval); len(deleted) > 0 {
for _, id := range deleted {
s.hooks.OnQosDropped(client, packets.Packet{PacketID: id})
}
@@ -1632,7 +1751,7 @@ func (s *Server) sendDelayedLWT(dt int64) {
}
}

// AtomicItoa converts an int64 point to a string.
func AtomicItoa(ptr *int64) string {
return strconv.FormatInt(atomic.LoadInt64(ptr), 10)
// Int64toa converts an int64 to a string.
func Int64toa(v int64) string {
return strconv.FormatInt(v, 10)
}

314 server_test.go
@@ -96,24 +96,24 @@ func (h *DelayHook) OnDisconnect(cl *Client, err error, expire bool) {
}

func newServer() *Server {
cc := *DefaultServerCapabilities
cc := NewDefaultServerCapabilities()
cc.MaximumMessageExpiryInterval = 0
cc.ReceiveMaximum = 0
s := New(&Options{
Logger: logger,
Capabilities: &cc,
Capabilities: cc,
})
_ = s.AddHook(new(AllowHook), nil)
return s
}

func newServerWithInlineClient() *Server {
cc := *DefaultServerCapabilities
cc := NewDefaultServerCapabilities()
cc.MaximumMessageExpiryInterval = 0
cc.ReceiveMaximum = 0
s := New(&Options{
Logger: logger,
Capabilities: &cc,
Capabilities: cc,
InlineClient: true,
})
_ = s.AddHook(new(AllowHook), nil)
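
The test helpers now obtain capabilities from `NewDefaultServerCapabilities()` (which yields a fresh `*Capabilities`) instead of copying the shared `DefaultServerCapabilities` value and passing `&cc`. A minimal sketch of the same pattern from a caller's side; the module path and import alias are assumptions:

```go
package main

import (
	mqtt "github.com/mochi-mqtt/server/v2"
)

func main() {
	// Each call returns a distinct *Capabilities, so tweaking it cannot leak
	// into other servers the way mutating a shared default value could.
	cc := mqtt.NewDefaultServerCapabilities()
	cc.MaximumMessageExpiryInterval = 0 // disable the server-enforced message expiry cap
	cc.ReceiveMaximum = 0               // mirror the test helpers above

	server := mqtt.New(&mqtt.Options{
		Capabilities: cc, // pass the pointer directly; no &cc copy needed anymore
	})
	defer server.Close()
}
```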
@@ -125,7 +125,7 @@ func TestOptionsSetDefaults(t *testing.T) {
opts.ensureDefaults()

require.Equal(t, defaultSysTopicInterval, opts.SysTopicResendInterval)
require.Equal(t, DefaultServerCapabilities, opts.Capabilities)
require.Equal(t, NewDefaultServerCapabilities(), opts.Capabilities)

opts = new(Options)
opts.ensureDefaults()
@@ -220,6 +220,34 @@ func TestServerAddListener(t *testing.T) {
|
||||
require.Equal(t, ErrListenerIDExists, err)
|
||||
}
|
||||
|
||||
func TestServerAddHooksFromConfig(t *testing.T) {
|
||||
s := newServer()
|
||||
defer s.Close()
|
||||
require.NotNil(t, s)
|
||||
s.Log = logger
|
||||
|
||||
hooks := []HookLoadConfig{
|
||||
{Hook: new(modifiedHookBase)},
|
||||
}
|
||||
|
||||
err := s.AddHooksFromConfig(hooks)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestServerAddHooksFromConfigError(t *testing.T) {
|
||||
s := newServer()
|
||||
defer s.Close()
|
||||
require.NotNil(t, s)
|
||||
s.Log = logger
|
||||
|
||||
hooks := []HookLoadConfig{
|
||||
{Hook: new(modifiedHookBase), Config: map[string]interface{}{}},
|
||||
}
|
||||
|
||||
err := s.AddHooksFromConfig(hooks)
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestServerAddListenerInitFailure(t *testing.T) {
|
||||
s := newServer()
|
||||
defer s.Close()
|
||||
@@ -232,6 +260,60 @@ func TestServerAddListenerInitFailure(t *testing.T) {
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestServerAddListenersFromConfig(t *testing.T) {
s := newServer()
defer s.Close()
require.NotNil(t, s)
s.Log = logger

lc := []listeners.Config{
{Type: listeners.TypeTCP, ID: "tcp", Address: ":1883"},
{Type: listeners.TypeWS, ID: "ws", Address: ":1882"},
{Type: listeners.TypeHealthCheck, ID: "health", Address: ":1881"},
{Type: listeners.TypeSysInfo, ID: "info", Address: ":1880"},
{Type: listeners.TypeUnix, ID: "unix", Address: "mochi.sock"},
{Type: listeners.TypeMock, ID: "mock", Address: "0"},
{Type: "unknown", ID: "unknown"},
}

err := s.AddListenersFromConfig(lc)
require.NoError(t, err)
require.Equal(t, 6, s.Listeners.Len())

tcp, _ := s.Listeners.Get("tcp")
require.Equal(t, "[::]:1883", tcp.Address())

ws, _ := s.Listeners.Get("ws")
require.Equal(t, ":1882", ws.Address())

health, _ := s.Listeners.Get("health")
require.Equal(t, ":1881", health.Address())

info, _ := s.Listeners.Get("info")
require.Equal(t, ":1880", info.Address())

unix, _ := s.Listeners.Get("unix")
require.Equal(t, "mochi.sock", unix.Address())

mock, _ := s.Listeners.Get("mock")
require.Equal(t, "0", mock.Address())
}
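
`AddListenersFromConfig` (exercised above) and the `Options.Listeners` / `Options.Hooks` fields (exercised in `TestServerServeFromConfig` below) let a broker be assembled from declarative config rather than hand-built listener objects. A hedged sketch of the same idea in user code; the module paths and the `auth.AllowHook` location are assumptions:

```go
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/hooks/auth"
	"github.com/mochi-mqtt/server/v2/listeners"
)

func main() {
	server := mqtt.New(&mqtt.Options{
		// Listener configs are resolved by Type when Serve() runs,
		// mirroring s.Options.Listeners in the tests above and below.
		Listeners: []listeners.Config{
			{Type: listeners.TypeTCP, ID: "tcp", Address: ":1883"},
			{Type: listeners.TypeWS, ID: "ws", Address: ":1882"},
		},
	})
	_ = server.AddHook(new(auth.AllowHook), nil)

	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}
	select {} // Serve returns once listeners are started; block to keep them running
}
```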

func TestServerAddListenersFromConfigError(t *testing.T) {
s := newServer()
defer s.Close()
require.NotNil(t, s)
s.Log = logger

lc := []listeners.Config{
{Type: listeners.TypeTCP, ID: "tcp", Address: "x"},
}

err := s.AddListenersFromConfig(lc)
require.Error(t, err)
require.Equal(t, 0, s.Listeners.Len())
}
|
||||
func TestServerServe(t *testing.T) {
|
||||
s := newServer()
|
||||
defer s.Close()
|
||||
@@ -253,6 +335,57 @@ func TestServerServe(t *testing.T) {
|
||||
require.Equal(t, true, listener.(*listeners.MockListener).IsServing())
|
||||
}
|
||||
|
||||
func TestServerServeFromConfig(t *testing.T) {
|
||||
s := newServer()
|
||||
defer s.Close()
|
||||
require.NotNil(t, s)
|
||||
|
||||
s.Options.Listeners = []listeners.Config{
|
||||
{Type: listeners.TypeMock, ID: "mock", Address: "0"},
|
||||
}
|
||||
|
||||
s.Options.Hooks = []HookLoadConfig{
|
||||
{Hook: new(modifiedHookBase)},
|
||||
}
|
||||
|
||||
err := s.Serve()
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(time.Millisecond)
|
||||
|
||||
require.Equal(t, 1, s.Listeners.Len())
|
||||
listener, ok := s.Listeners.Get("mock")
|
||||
|
||||
require.Equal(t, true, ok)
|
||||
require.Equal(t, true, listener.(*listeners.MockListener).IsServing())
|
||||
}
|
||||
|
||||
func TestServerServeFromConfigListenerError(t *testing.T) {
|
||||
s := newServer()
|
||||
defer s.Close()
|
||||
require.NotNil(t, s)
|
||||
|
||||
s.Options.Listeners = []listeners.Config{
|
||||
{Type: listeners.TypeTCP, ID: "tcp", Address: "x"},
|
||||
}
|
||||
|
||||
err := s.Serve()
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestServerServeFromConfigHookError(t *testing.T) {
|
||||
s := newServer()
|
||||
defer s.Close()
|
||||
require.NotNil(t, s)
|
||||
|
||||
s.Options.Hooks = []HookLoadConfig{
|
||||
{Hook: new(modifiedHookBase), Config: map[string]interface{}{}},
|
||||
}
|
||||
|
||||
err := s.Serve()
|
||||
require.Error(t, err)
|
||||
}
|
||||
|
||||
func TestServerServeReadStoreFailure(t *testing.T) {
|
||||
s := newServer()
|
||||
defer s.Close()
|
||||
@@ -811,6 +944,41 @@ func TestServerEstablishConnectionInvalidConnect(t *testing.T) {
|
||||
_ = r.Close()
|
||||
}
|
||||
|
||||
func TestEstablishConnectionMaximumClientsReached(t *testing.T) {
|
||||
cc := NewDefaultServerCapabilities()
|
||||
cc.MaximumClients = 0
|
||||
s := New(&Options{
|
||||
Logger: logger,
|
||||
Capabilities: cc,
|
||||
})
|
||||
_ = s.AddHook(new(AllowHook), nil)
|
||||
defer s.Close()
|
||||
|
||||
r, w := net.Pipe()
|
||||
o := make(chan error)
|
||||
go func() {
|
||||
o <- s.EstablishConnection("tcp", r)
|
||||
}()
|
||||
|
||||
go func() {
|
||||
_, _ = w.Write(packets.TPacketData[packets.Connect].Get(packets.TConnectClean).RawBytes)
|
||||
}()
|
||||
|
||||
// receive the connack
|
||||
recv := make(chan []byte)
|
||||
go func() {
|
||||
buf, err := io.ReadAll(w)
|
||||
require.NoError(t, err)
|
||||
recv <- buf
|
||||
}()
|
||||
|
||||
err := <-o
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, packets.ErrServerBusy)
|
||||
|
||||
_ = r.Close()
|
||||
}
|
||||
|
||||
// See https://github.com/mochi-mqtt/server/issues/178
|
||||
func TestServerEstablishConnectionZeroByteUsernameIsValid(t *testing.T) {
|
||||
s := newServer()
|
||||
@@ -1529,10 +1697,10 @@ func TestServerProcessPublishACLCheckDeny(t *testing.T) {
|
||||
|
||||
for _, tx := range tt {
|
||||
t.Run(tx.name, func(t *testing.T) {
|
||||
cc := *DefaultServerCapabilities
|
||||
cc := NewDefaultServerCapabilities()
|
||||
s := New(&Options{
|
||||
Logger: logger,
|
||||
Capabilities: &cc,
|
||||
Capabilities: cc,
|
||||
})
|
||||
_ = s.AddHook(new(DenyHook), nil)
|
||||
_ = s.Serve()
|
||||
@@ -1907,6 +2075,7 @@ func TestPublishToClientSubscriptionDowngradeQos(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestPublishToClientExceedClientWritesPending(t *testing.T) {
|
||||
var sendQuota uint16 = 5
|
||||
s := newServer()
|
||||
|
||||
_, w := net.Pipe()
|
||||
@@ -1917,9 +2086,12 @@ func TestPublishToClientExceedClientWritesPending(t *testing.T) {
|
||||
options: &Options{
|
||||
Capabilities: &Capabilities{
|
||||
MaximumClientWritesPending: 3,
|
||||
maximumPacketID: 10,
|
||||
},
|
||||
},
|
||||
})
|
||||
cl.Properties.Props.ReceiveMaximum = sendQuota
|
||||
cl.State.Inflight.ResetSendQuota(int32(cl.Properties.Props.ReceiveMaximum))
|
||||
|
||||
s.Clients.Add(cl)
|
||||
|
||||
@@ -1928,9 +2100,20 @@ func TestPublishToClientExceedClientWritesPending(t *testing.T) {
|
||||
atomic.AddInt32(&cl.State.outboundQty, 1)
|
||||
}
|
||||
|
||||
id, _ := cl.NextPacketID()
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: uint16(id)})
|
||||
cl.State.Inflight.DecreaseSendQuota()
|
||||
sendQuota--
|
||||
|
||||
_, err := s.publishToClient(cl, packets.Subscription{Filter: "a/b/c", Qos: 2}, packets.Packet{})
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, packets.ErrPendingClientWritesExceeded, err)
|
||||
require.Equal(t, int32(sendQuota), atomic.LoadInt32(&cl.State.Inflight.sendQuota))
|
||||
|
||||
_, err = s.publishToClient(cl, packets.Subscription{Filter: "a/b/c", Qos: 2}, packets.Packet{FixedHeader: packets.FixedHeader{Qos: 1}})
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, packets.ErrPendingClientWritesExceeded, err)
|
||||
require.Equal(t, int32(sendQuota), atomic.LoadInt32(&cl.State.Inflight.sendQuota))
|
||||
}
|
||||
|
||||
func TestPublishToClientServerTopicAlias(t *testing.T) {
|
||||
@@ -1986,6 +2169,22 @@ func TestPublishToClientMqtt5RetainAsPublishedTrueLeverageNoConn(t *testing.T) {
|
||||
require.ErrorIs(t, err, packets.CodeDisconnect)
|
||||
}
|
||||
|
||||
func TestPublishToClientExceedMaximumInflight(t *testing.T) {
|
||||
const MaxInflight uint16 = 5
|
||||
s := newServer()
|
||||
cl, _, _ := newTestClient()
|
||||
s.Options.Capabilities.MaximumInflight = MaxInflight
|
||||
cl.ops.options.Capabilities.MaximumInflight = MaxInflight
|
||||
for i := uint16(0); i < MaxInflight; i++ {
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: i})
|
||||
}
|
||||
|
||||
_, err := s.publishToClient(cl, packets.Subscription{Filter: "a/b/c", Qos: 1}, *packets.TPacketData[packets.Publish].Get(packets.TPublishQos1).Packet)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, packets.ErrQuotaExceeded)
|
||||
require.Equal(t, int64(1), atomic.LoadInt64(&s.Info.InflightDropped))
|
||||
}
|
||||
|
||||
func TestPublishToClientExhaustedPacketID(t *testing.T) {
|
||||
s := newServer()
|
||||
cl, _, _ := newTestClient()
|
||||
@@ -1996,6 +2195,7 @@ func TestPublishToClientExhaustedPacketID(t *testing.T) {
|
||||
_, err := s.publishToClient(cl, packets.Subscription{Filter: "a/b/c", Qos: 1}, *packets.TPacketData[packets.Publish].Get(packets.TPublishQos1).Packet)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, packets.ErrQuotaExceeded)
|
||||
require.Equal(t, int64(1), atomic.LoadInt64(&s.Info.InflightDropped))
|
||||
}
|
||||
|
||||
func TestPublishToClientACLNotAuthorized(t *testing.T) {
|
||||
@@ -2058,7 +2258,7 @@ func TestPublishToSubscribersExhaustedSendQuota(t *testing.T) {
|
||||
require.True(t, subbed)
|
||||
|
||||
// coverage: subscriber publish errors are non-returnable
|
||||
// can we hook into zerolog ?
|
||||
// can we hook into log/slog ?
|
||||
_ = r.Close()
|
||||
pkx := *packets.TPacketData[packets.Publish].Get(packets.TPublishQos1).Packet
|
||||
pkx.PacketID = 0
|
||||
@@ -2079,7 +2279,7 @@ func TestPublishToSubscribersExhaustedPacketIDs(t *testing.T) {
|
||||
require.True(t, subbed)
|
||||
|
||||
// coverage: subscriber publish errors are non-returnable
|
||||
// can we hook into zerolog ?
|
||||
// can we hook into log/slog ?
|
||||
_ = r.Close()
|
||||
pkx := *packets.TPacketData[packets.Publish].Get(packets.TPublishQos1).Packet
|
||||
pkx.PacketID = 0
|
||||
@@ -2096,7 +2296,7 @@ func TestPublishToSubscribersNoConnection(t *testing.T) {
|
||||
require.True(t, subbed)
|
||||
|
||||
// coverage: subscriber publish errors are non-returnable
|
||||
// can we hook into zerolog ?
|
||||
// can we hook into log/slog ?
|
||||
_ = r.Close()
|
||||
s.publishToSubscribers(*packets.TPacketData[packets.Publish].Get(packets.TPublishBasic).Packet)
|
||||
time.Sleep(time.Millisecond)
|
||||
@@ -2938,6 +3138,22 @@ func TestServerProcessPacketDisconnectNonZeroExpiryViolation(t *testing.T) {
|
||||
require.ErrorIs(t, err, packets.ErrProtocolViolationZeroNonZeroExpiry)
|
||||
}
|
||||
|
||||
func TestServerProcessPacketDisconnectDisconnectWithWillMessage(t *testing.T) {
|
||||
s := newServer()
|
||||
cl, _, _ := newTestClient()
|
||||
cl.Properties.Props.SessionExpiryInterval = 30
|
||||
cl.Properties.ProtocolVersion = 5
|
||||
|
||||
s.loop.willDelayed.Add(cl.ID, packets.Packet{TopicName: "a/b/c", Payload: []byte("hello")})
|
||||
require.Equal(t, 1, s.loop.willDelayed.Len())
|
||||
|
||||
err := s.processPacket(cl, *packets.TPacketData[packets.Disconnect].Get(packets.TDisconnectMqtt5DisconnectWithWillMessage).Packet)
|
||||
require.Error(t, err)
|
||||
|
||||
require.Equal(t, 1, s.loop.willDelayed.Len())
|
||||
require.False(t, cl.Closed())
|
||||
}
|
||||
|
||||
func TestServerProcessPacketAuth(t *testing.T) {
|
||||
s := newServer()
|
||||
cl, r, w := newTestClient()
|
||||
@@ -3128,15 +3344,50 @@ func TestServerLoadClients(t *testing.T) {
|
||||
{ID: "mochi"},
|
||||
{ID: "zen"},
|
||||
{ID: "mochi-co"},
|
||||
{ID: "v3-clean", ProtocolVersion: 4, Clean: true},
|
||||
{ID: "v3-not-clean", ProtocolVersion: 4, Clean: false},
|
||||
{
|
||||
ID: "v5-clean",
|
||||
ProtocolVersion: 5,
|
||||
Clean: true,
|
||||
Properties: storage.ClientProperties{
|
||||
SessionExpiryInterval: 10,
|
||||
},
|
||||
},
|
||||
{
|
||||
ID: "v5-expire-interval-0",
|
||||
ProtocolVersion: 5,
|
||||
Properties: storage.ClientProperties{
|
||||
SessionExpiryInterval: 0,
|
||||
},
|
||||
},
|
||||
{
|
||||
ID: "v5-expire-interval-not-0",
|
||||
ProtocolVersion: 5,
|
||||
Properties: storage.ClientProperties{
|
||||
SessionExpiryInterval: 10,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
s := newServer()
|
||||
require.Equal(t, 0, s.Clients.Len())
|
||||
s.loadClients(v)
|
||||
require.Equal(t, 3, s.Clients.Len())
|
||||
require.Equal(t, 6, s.Clients.Len())
|
||||
cl, ok := s.Clients.Get("mochi")
|
||||
require.True(t, ok)
|
||||
require.Equal(t, "mochi", cl.ID)
|
||||
|
||||
_, ok = s.Clients.Get("v3-clean")
|
||||
require.False(t, ok)
|
||||
_, ok = s.Clients.Get("v3-not-clean")
|
||||
require.True(t, ok)
|
||||
_, ok = s.Clients.Get("v5-clean")
|
||||
require.True(t, ok)
|
||||
_, ok = s.Clients.Get("v5-expire-interval-0")
|
||||
require.False(t, ok)
|
||||
_, ok = s.Clients.Get("v5-expire-interval-not-0")
|
||||
require.True(t, ok)
|
||||
}
|
||||
|
||||
func TestServerLoadSubscriptions(t *testing.T) {
|
||||
@@ -3165,10 +3416,10 @@ func TestServerLoadInflightMessages(t *testing.T) {
|
||||
require.Equal(t, 3, s.Clients.Len())
|
||||
|
||||
v := []storage.Message{
|
||||
{Origin: "mochi", PacketID: 1, Payload: []byte("hello world"), TopicName: "a/b/c"},
|
||||
{Origin: "mochi", PacketID: 2, Payload: []byte("yes"), TopicName: "a/b/c"},
|
||||
{Origin: "zen", PacketID: 3, Payload: []byte("hello world"), TopicName: "a/b/c"},
|
||||
{Origin: "mochi-co", PacketID: 4, Payload: []byte("hello world"), TopicName: "a/b/c"},
|
||||
{Client: "mochi", Origin: "mochi", PacketID: 1, Payload: []byte("hello world"), TopicName: "a/b/c"},
|
||||
{Client: "mochi", Origin: "mochi", PacketID: 2, Payload: []byte("yes"), TopicName: "a/b/c"},
|
||||
{Client: "zen", Origin: "zen", PacketID: 3, Payload: []byte("hello world"), TopicName: "a/b/c"},
|
||||
{Client: "mochi-co", Origin: "mochi-co", PacketID: 4, Payload: []byte("hello world"), TopicName: "a/b/c"},
|
||||
}
|
||||
s.loadInflight(v)
|
||||
|
||||
@@ -3259,6 +3510,11 @@ func TestServerClearExpiredInflights(t *testing.T) {
|
||||
s.clearExpiredInflights(n)
|
||||
require.Len(t, cl.State.Inflight.GetAll(false), 2)
|
||||
require.Equal(t, int64(-3), s.Info.Inflight)
|
||||
|
||||
s.Options.Capabilities.MaximumMessageExpiryInterval = 0
|
||||
cl.State.Inflight.Set(packets.Packet{PacketID: 8, Expiry: n - 8})
|
||||
s.clearExpiredInflights(n)
|
||||
require.Len(t, cl.State.Inflight.GetAll(false), 3)
|
||||
}
|
||||
|
||||
func TestServerClearExpiredRetained(t *testing.T) {
|
||||
@@ -3267,15 +3523,28 @@ func TestServerClearExpiredRetained(t *testing.T) {
|
||||
s.Options.Capabilities.MaximumMessageExpiryInterval = 4
|
||||
|
||||
n := time.Now().Unix()
|
||||
s.Topics.Retained.Add("a/b/c", packets.Packet{Created: n, Expiry: n - 1})
|
||||
s.Topics.Retained.Add("d/e/f", packets.Packet{Created: n, Expiry: n - 2})
|
||||
s.Topics.Retained.Add("g/h/i", packets.Packet{Created: n - 3}) // within bounds
|
||||
s.Topics.Retained.Add("j/k/l", packets.Packet{Created: n - 5}) // over max server expiry limit
|
||||
s.Topics.Retained.Add("m/n/o", packets.Packet{Created: n})
|
||||
s.Topics.Retained.Add("a/b/c", packets.Packet{ProtocolVersion: 5, Created: n, Expiry: n - 1})
|
||||
s.Topics.Retained.Add("d/e/f", packets.Packet{ProtocolVersion: 5, Created: n, Expiry: n - 2})
|
||||
s.Topics.Retained.Add("g/h/i", packets.Packet{ProtocolVersion: 5, Created: n - 3}) // within bounds
|
||||
s.Topics.Retained.Add("j/k/l", packets.Packet{ProtocolVersion: 5, Created: n - 5}) // over max server expiry limit
|
||||
s.Topics.Retained.Add("m/n/o", packets.Packet{ProtocolVersion: 5, Created: n})
|
||||
|
||||
require.Len(t, s.Topics.Retained.GetAll(), 5)
|
||||
s.clearExpiredRetainedMessages(n)
|
||||
require.Len(t, s.Topics.Retained.GetAll(), 2)
|
||||
|
||||
s.Topics.Retained.Add("p/q/r", packets.Packet{Created: n, Expiry: n - 1})
|
||||
s.Topics.Retained.Add("s/t/u", packets.Packet{Created: n, Expiry: n - 2}) // expiry is ineffective for v3.
|
||||
s.Topics.Retained.Add("v/w/x", packets.Packet{Created: n - 3}) // within bounds for v3
|
||||
s.Topics.Retained.Add("y/z/1", packets.Packet{Created: n - 5}) // over max server expiry limit
|
||||
require.Len(t, s.Topics.Retained.GetAll(), 6)
|
||||
s.clearExpiredRetainedMessages(n)
|
||||
require.Len(t, s.Topics.Retained.GetAll(), 5)
|
||||
|
||||
s.Options.Capabilities.MaximumMessageExpiryInterval = 0
|
||||
s.Topics.Retained.Add("2/3/4", packets.Packet{Created: n - 8})
|
||||
s.clearExpiredRetainedMessages(n)
|
||||
require.Len(t, s.Topics.Retained.GetAll(), 6)
|
||||
}
|
||||
|
||||
func TestServerClearExpiredClients(t *testing.T) {
|
||||
@@ -3335,10 +3604,9 @@ func TestLoadServerInfoRestoreOnRestart(t *testing.T) {
|
||||
require.Equal(t, int64(60), s.Info.BytesReceived)
|
||||
}
|
||||
|
||||
func TestAtomicItoa(t *testing.T) {
|
||||
func TestItoa(t *testing.T) {
|
||||
i := int64(22)
|
||||
ip := &i
|
||||
require.Equal(t, "22", AtomicItoa(ip))
|
||||
require.Equal(t, "22", Int64toa(i))
|
||||
}
|
||||
|
||||
func TestServerSubscribe(t *testing.T) {
|
||||
|
||||
@@ -514,7 +514,7 @@ func (x *TopicsIndex) seek(filter string, d int) *particle {

// trim removes empty filter particles from the index.
func (x *TopicsIndex) trim(n *particle) {
for n.parent != nil && n.retainPath == "" && n.particles.len()+n.subscriptions.Len()+n.shared.Len() == 0 {
for n.parent != nil && n.retainPath == "" && n.particles.len()+n.subscriptions.Len()+n.shared.Len()+n.inlineSubscriptions.Len() == 0 {
key := n.key
n = n.parent
n.particles.delete(key)

1 vendor/github.com/AndreasBriese/bbloom/.travis.yml generated vendored
@@ -1 +0,0 @@
|
||||
language: go
|
||||
35 vendor/github.com/AndreasBriese/bbloom/LICENSE generated vendored
@@ -1,35 +0,0 @@
|
||||
bbloom.go
|
||||
|
||||
// The MIT License (MIT)
|
||||
// Copyright (c) 2014 Andreas Briese, eduToolbox@Bri-C GmbH, Sarstedt
|
||||
|
||||
// Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
// this software and associated documentation files (the "Software"), to deal in
|
||||
// the Software without restriction, including without limitation the rights to
|
||||
// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
|
||||
// the Software, and to permit persons to whom the Software is furnished to do so,
|
||||
// subject to the following conditions:
|
||||
|
||||
// The above copyright notice and this permission notice shall be included in all
|
||||
// copies or substantial portions of the Software.
|
||||
|
||||
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
|
||||
// FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
|
||||
// COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
|
||||
// IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
// CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
siphash.go
|
||||
|
||||
// https://github.com/dchest/siphash
|
||||
//
|
||||
// Written in 2012 by Dmitry Chestnykh.
|
||||
//
|
||||
// To the extent possible under law, the author have dedicated all copyright
|
||||
// and related and neighboring rights to this software to the public domain
|
||||
// worldwide. This software is distributed without any warranty.
|
||||
// http://creativecommons.org/publicdomain/zero/1.0/
|
||||
//
|
||||
// Package siphash implements SipHash-2-4, a fast short-input PRF
|
||||
// created by Jean-Philippe Aumasson and Daniel J. Bernstein.
|
||||
131 vendor/github.com/AndreasBriese/bbloom/README.md generated vendored
@@ -1,131 +0,0 @@
|
||||
## bbloom: a bitset Bloom filter for go/golang
|
||||
===
|
||||
|
||||
[](http://travis-ci.org/AndreasBriese/bbloom)
|
||||
|
||||
package implements a fast bloom filter with real 'bitset' and JSONMarshal/JSONUnmarshal to store/reload the Bloom filter.
|
||||
|
||||
NOTE: the package uses unsafe.Pointer to set and read the bits from the bitset. If you're uncomfortable with using the unsafe package, please consider using my bloom filter package at github.com/AndreasBriese/bloom
|
||||
|
||||
===
|
||||
|
||||
changelog 11/2015: new thread safe methods AddTS(), HasTS(), AddIfNotHasTS() following a suggestion from Srdjan Marinovic (github @a-little-srdjan), who used this to code a bloomfilter cache.
|
||||
|
||||
This bloom filter was developed to strengthen a website-log database and was tested and optimized for this log-entry mask: "2014/%02i/%02i %02i:%02i:%02i /info.html".
|
||||
Nonetheless bbloom should work with any other form of entries.
|
||||
|
||||
~~Hash function is a modified Berkeley DB sdbm hash (to optimize for smaller strings). sdbm http://www.cse.yorku.ca/~oz/hash.html~~
|
||||
|
||||
Found sipHash (SipHash-2-4, a fast short-input PRF created by Jean-Philippe Aumasson and Daniel J. Bernstein.) to be about as fast. sipHash had been ported by Dimtry Chestnyk to Go (github.com/dchest/siphash )
|
||||
|
||||
Minimum hashset size is: 512 ([4]uint64; will be set automatically).
|
||||
|
||||
###install
|
||||
|
||||
```sh
|
||||
go get github.com/AndreasBriese/bbloom
|
||||
```
|
||||
|
||||
###test
|
||||
+ change to folder ../bbloom
|
||||
+ create wordlist in file "words.txt" (you might use `python permut.py`)
|
||||
+ run 'go test -bench=.' within the folder
|
||||
|
||||
```go
|
||||
go test -bench=.
|
||||
```
|
||||
|
||||
~~If you've installed the GOCONVEY TDD-framework http://goconvey.co/ you can run the tests automatically.~~
|
||||
|
||||
using go's testing framework now (have in mind that the op timing is related to 65536 operations of Add, Has, AddIfNotHas respectively)
|
||||
|
||||
### usage
|
||||
|
||||
after installation add
|
||||
|
||||
```go
|
||||
import (
|
||||
...
|
||||
"github.com/AndreasBriese/bbloom"
|
||||
...
|
||||
)
|
||||
```
|
||||
|
||||
at your header. In the program use
|
||||
|
||||
```go
|
||||
// create a bloom filter for 65536 items and 1 % wrong-positive ratio
|
||||
bf := bbloom.New(float64(1<<16), float64(0.01))
|
||||
|
||||
// or
|
||||
// create a bloom filter with 650000 for 65536 items and 7 locs per hash explicitly
|
||||
// bf = bbloom.New(float64(650000), float64(7))
|
||||
// or
|
||||
bf = bbloom.New(650000.0, 7.0)
|
||||
|
||||
// add one item
|
||||
bf.Add([]byte("butter"))
|
||||
|
||||
// Number of elements added is exposed now
|
||||
// Note: ElemNum will not be included in JSON export (for compatability to older version)
|
||||
nOfElementsInFilter := bf.ElemNum
|
||||
|
||||
// check if item is in the filter
|
||||
isIn := bf.Has([]byte("butter")) // should be true
|
||||
isNotIn := bf.Has([]byte("Butter")) // should be false
|
||||
|
||||
// 'add only if item is new' to the bloomfilter
|
||||
added := bf.AddIfNotHas([]byte("butter")) // should be false because 'butter' is already in the set
|
||||
added = bf.AddIfNotHas([]byte("buTTer")) // should be true because 'buTTer' is new
|
||||
|
||||
// thread safe versions for concurrent use: AddTS, HasTS, AddIfNotHasTS
|
||||
// add one item
|
||||
bf.AddTS([]byte("peanutbutter"))
|
||||
// check if item is in the filter
|
||||
isIn = bf.HasTS([]byte("peanutbutter")) // should be true
|
||||
isNotIn = bf.HasTS([]byte("peanutButter")) // should be false
|
||||
// 'add only if item is new' to the bloomfilter
|
||||
added = bf.AddIfNotHasTS([]byte("butter")) // should be false because 'peanutbutter' is already in the set
|
||||
added = bf.AddIfNotHasTS([]byte("peanutbuTTer")) // should be true because 'penutbuTTer' is new
|
||||
|
||||
// convert to JSON ([]byte)
|
||||
Json := bf.JSONMarshal()
|
||||
|
||||
// bloomfilters Mutex is exposed for external un-/locking
|
||||
// i.e. mutex lock while doing JSON conversion
|
||||
bf.Mtx.Lock()
|
||||
Json = bf.JSONMarshal()
|
||||
bf.Mtx.Unlock()
|
||||
|
||||
// restore a bloom filter from storage
|
||||
bfNew := bbloom.JSONUnmarshal(Json)
|
||||
|
||||
isInNew := bfNew.Has([]byte("butter")) // should be true
|
||||
isNotInNew := bfNew.Has([]byte("Butter")) // should be false
|
||||
|
||||
```
|
||||
|
||||
to work with the bloom filter.
|
||||
|
||||
### why 'fast'?
|
||||
|
||||
It's about 3 times faster than William Fitzgeralds bitset bloom filter https://github.com/willf/bloom . And it is about so fast as my []bool set variant for Boom filters (see https://github.com/AndreasBriese/bloom ) but having a 8times smaller memory footprint:
|
||||
|
||||
|
||||
Bloom filter (filter size 524288, 7 hashlocs)
|
||||
github.com/AndreasBriese/bbloom 'Add' 65536 items (10 repetitions): 6595800 ns (100 ns/op)
|
||||
github.com/AndreasBriese/bbloom 'Has' 65536 items (10 repetitions): 5986600 ns (91 ns/op)
|
||||
github.com/AndreasBriese/bloom 'Add' 65536 items (10 repetitions): 6304684 ns (96 ns/op)
|
||||
github.com/AndreasBriese/bloom 'Has' 65536 items (10 repetitions): 6568663 ns (100 ns/op)
|
||||
|
||||
github.com/willf/bloom 'Add' 65536 items (10 repetitions): 24367224 ns (371 ns/op)
|
||||
github.com/willf/bloom 'Test' 65536 items (10 repetitions): 21881142 ns (333 ns/op)
|
||||
github.com/dataence/bloom/standard 'Add' 65536 items (10 repetitions): 23041644 ns (351 ns/op)
|
||||
github.com/dataence/bloom/standard 'Check' 65536 items (10 repetitions): 19153133 ns (292 ns/op)
|
||||
github.com/cabello/bloom 'Add' 65536 items (10 repetitions): 131921507 ns (2012 ns/op)
|
||||
github.com/cabello/bloom 'Contains' 65536 items (10 repetitions): 131108962 ns (2000 ns/op)
|
||||
|
||||
(on MBPro15 OSX10.8.5 i7 4Core 2.4Ghz)
|
||||
|
||||
|
||||
With 32bit bloom filters (bloom32) using modified sdbm, bloom32 does hashing with only 2 bit shifts, one xor and one substraction per byte. smdb is about as fast as fnv64a but gives less collisions with the dataset (see mask above). bloom.New(float64(10 * 1<<16),float64(7)) populated with 1<<16 random items from the dataset (see above) and tested against the rest results in less than 0.05% collisions.
|
||||
284 vendor/github.com/AndreasBriese/bbloom/bbloom.go generated vendored
@@ -1,284 +0,0 @@
|
||||
// The MIT License (MIT)
|
||||
// Copyright (c) 2014 Andreas Briese, eduToolbox@Bri-C GmbH, Sarstedt
|
||||
|
||||
// Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
// this software and associated documentation files (the "Software"), to deal in
|
||||
// the Software without restriction, including without limitation the rights to
|
||||
// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
|
||||
// the Software, and to permit persons to whom the Software is furnished to do so,
|
||||
// subject to the following conditions:
|
||||
|
||||
// The above copyright notice and this permission notice shall be included in all
|
||||
// copies or substantial portions of the Software.
|
||||
|
||||
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
|
||||
// FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
|
||||
// COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
|
||||
// IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
// CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
// 2019/08/25 code revision to reduce unsafe use
|
||||
// Parts are adopted from the fork at ipfs/bbloom after performance rev by
|
||||
// Steve Allen (https://github.com/Stebalien)
|
||||
// (see https://github.com/ipfs/bbloom/blob/master/bbloom.go)
|
||||
// -> func Has
|
||||
// -> func set
|
||||
// -> func add
|
||||
|
||||
package bbloom
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"log"
|
||||
"math"
|
||||
"sync"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
// helper
|
||||
// not needed anymore by Set
|
||||
// var mask = []uint8{1, 2, 4, 8, 16, 32, 64, 128}
|
||||
|
||||
func getSize(ui64 uint64) (size uint64, exponent uint64) {
|
||||
if ui64 < uint64(512) {
|
||||
ui64 = uint64(512)
|
||||
}
|
||||
size = uint64(1)
|
||||
for size < ui64 {
|
||||
size <<= 1
|
||||
exponent++
|
||||
}
|
||||
return size, exponent
|
||||
}
|
||||
|
||||
func calcSizeByWrongPositives(numEntries, wrongs float64) (uint64, uint64) {
|
||||
size := -1 * numEntries * math.Log(wrongs) / math.Pow(float64(0.69314718056), 2)
|
||||
locs := math.Ceil(float64(0.69314718056) * size / numEntries)
|
||||
return uint64(size), uint64(locs)
|
||||
}
|
||||
|
||||
// New
|
||||
// returns a new bloomfilter
|
||||
func New(params ...float64) (bloomfilter Bloom) {
|
||||
var entries, locs uint64
|
||||
if len(params) == 2 {
|
||||
if params[1] < 1 {
|
||||
entries, locs = calcSizeByWrongPositives(params[0], params[1])
|
||||
} else {
|
||||
entries, locs = uint64(params[0]), uint64(params[1])
|
||||
}
|
||||
} else {
|
||||
log.Fatal("usage: New(float64(number_of_entries), float64(number_of_hashlocations)) i.e. New(float64(1000), float64(3)) or New(float64(number_of_entries), float64(number_of_hashlocations)) i.e. New(float64(1000), float64(0.03))")
|
||||
}
|
||||
size, exponent := getSize(uint64(entries))
|
||||
bloomfilter = Bloom{
|
||||
Mtx: &sync.Mutex{},
|
||||
sizeExp: exponent,
|
||||
size: size - 1,
|
||||
setLocs: locs,
|
||||
shift: 64 - exponent,
|
||||
}
|
||||
bloomfilter.Size(size)
|
||||
return bloomfilter
|
||||
}
|
||||
|
||||
// NewWithBoolset
|
||||
// takes a []byte slice and number of locs per entry
|
||||
// returns the bloomfilter with a bitset populated according to the input []byte
|
||||
func NewWithBoolset(bs *[]byte, locs uint64) (bloomfilter Bloom) {
|
||||
bloomfilter = New(float64(len(*bs)<<3), float64(locs))
|
||||
for i, b := range *bs {
|
||||
*(*uint8)(unsafe.Pointer(uintptr(unsafe.Pointer(&bloomfilter.bitset[0])) + uintptr(i))) = b
|
||||
}
|
||||
return bloomfilter
|
||||
}
|
||||
|
||||
// bloomJSONImExport
|
||||
// Im/Export structure used by JSONMarshal / JSONUnmarshal
|
||||
type bloomJSONImExport struct {
|
||||
FilterSet []byte
|
||||
SetLocs uint64
|
||||
}
|
||||
|
||||
// JSONUnmarshal
|
||||
// takes JSON-Object (type bloomJSONImExport) as []bytes
|
||||
// returns Bloom object
|
||||
func JSONUnmarshal(dbData []byte) Bloom {
|
||||
bloomImEx := bloomJSONImExport{}
|
||||
json.Unmarshal(dbData, &bloomImEx)
|
||||
buf := bytes.NewBuffer(bloomImEx.FilterSet)
|
||||
bs := buf.Bytes()
|
||||
bf := NewWithBoolset(&bs, bloomImEx.SetLocs)
|
||||
return bf
|
||||
}
|
||||
|
||||
//
|
||||
// Bloom filter
|
||||
type Bloom struct {
|
||||
Mtx *sync.Mutex
|
||||
ElemNum uint64
|
||||
bitset []uint64
|
||||
sizeExp uint64
|
||||
size uint64
|
||||
setLocs uint64
|
||||
shift uint64
|
||||
}
|
||||
|
||||
// <--- http://www.cse.yorku.ca/~oz/hash.html
|
||||
// modified Berkeley DB Hash (32bit)
|
||||
// hash is casted to l, h = 16bit fragments
|
||||
// func (bl Bloom) absdbm(b *[]byte) (l, h uint64) {
|
||||
// hash := uint64(len(*b))
|
||||
// for _, c := range *b {
|
||||
// hash = uint64(c) + (hash << 6) + (hash << bl.sizeExp) - hash
|
||||
// }
|
||||
// h = hash >> bl.shift
|
||||
// l = hash << bl.shift >> bl.shift
|
||||
// return l, h
|
||||
// }
|
||||
|
||||
// Update: found sipHash of Jean-Philippe Aumasson & Daniel J. Bernstein to be even faster than absdbm()
|
||||
// https://131002.net/siphash/
|
||||
// siphash was implemented for Go by Dmitry Chestnykh https://github.com/dchest/siphash
|
||||
|
||||
// Add
|
||||
// set the bit(s) for entry; Adds an entry to the Bloom filter
|
||||
func (bl *Bloom) Add(entry []byte) {
|
||||
l, h := bl.sipHash(entry)
|
||||
for i := uint64(0); i < bl.setLocs; i++ {
|
||||
bl.set((h + i*l) & bl.size)
|
||||
bl.ElemNum++
|
||||
}
|
||||
}
|
||||
|
||||
// AddTS
|
||||
// Thread safe: Mutex.Lock the bloomfilter for the time of processing the entry
|
||||
func (bl *Bloom) AddTS(entry []byte) {
|
||||
bl.Mtx.Lock()
|
||||
defer bl.Mtx.Unlock()
|
||||
bl.Add(entry)
|
||||
}
|
||||
|
||||
// Has
|
||||
// check if bit(s) for entry is/are set
|
||||
// returns true if the entry was added to the Bloom Filter
|
||||
func (bl Bloom) Has(entry []byte) bool {
|
||||
l, h := bl.sipHash(entry)
|
||||
res := true
|
||||
for i := uint64(0); i < bl.setLocs; i++ {
|
||||
res = res && bl.isSet((h+i*l)&bl.size)
|
||||
// https://github.com/ipfs/bbloom/commit/84e8303a9bfb37b2658b85982921d15bbb0fecff
|
||||
// // Branching here (early escape) is not worth it
|
||||
// // This is my conclusion from benchmarks
|
||||
// // (prevents loop unrolling)
|
||||
// switch bl.IsSet((h + i*l) & bl.size) {
|
||||
// case false:
|
||||
// return false
|
||||
// }
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
// HasTS
|
||||
// Thread safe: Mutex.Lock the bloomfilter for the time of processing the entry
|
||||
func (bl *Bloom) HasTS(entry []byte) bool {
|
||||
bl.Mtx.Lock()
|
||||
defer bl.Mtx.Unlock()
|
||||
return bl.Has(entry)
|
||||
}
|
||||
|
||||
// AddIfNotHas
|
||||
// Only Add entry if it's not present in the bloomfilter
|
||||
// returns true if entry was added
|
||||
// returns false if entry was allready registered in the bloomfilter
|
||||
func (bl Bloom) AddIfNotHas(entry []byte) (added bool) {
|
||||
if bl.Has(entry) {
|
||||
return added
|
||||
}
|
||||
bl.Add(entry)
|
||||
return true
|
||||
}
|
||||
|
||||
// AddIfNotHasTS
|
||||
// Tread safe: Only Add entry if it's not present in the bloomfilter
|
||||
// returns true if entry was added
|
||||
// returns false if entry was allready registered in the bloomfilter
|
||||
func (bl *Bloom) AddIfNotHasTS(entry []byte) (added bool) {
|
||||
bl.Mtx.Lock()
|
||||
defer bl.Mtx.Unlock()
|
||||
return bl.AddIfNotHas(entry)
|
||||
}
|
||||
|
||||
// Size
|
||||
// make Bloom filter with as bitset of size sz
|
||||
func (bl *Bloom) Size(sz uint64) {
|
||||
bl.bitset = make([]uint64, sz>>6)
|
||||
}
|
||||
|
||||
// Clear
|
||||
// resets the Bloom filter
|
||||
func (bl *Bloom) Clear() {
|
||||
bs := bl.bitset
|
||||
for i := range bs {
|
||||
bs[i] = 0
|
||||
}
|
||||
}
|
||||
|
||||
// Set
|
||||
// set the bit[idx] of bitsit
|
||||
func (bl *Bloom) set(idx uint64) {
|
||||
// ommit unsafe
|
||||
// *(*uint8)(unsafe.Pointer(uintptr(unsafe.Pointer(&bl.bitset[idx>>6])) + uintptr((idx%64)>>3))) |= mask[idx%8]
|
||||
bl.bitset[idx>>6] |= 1 << (idx % 64)
|
||||
}
|
||||
|
||||
// IsSet
|
||||
// check if bit[idx] of bitset is set
|
||||
// returns true/false
|
||||
func (bl *Bloom) isSet(idx uint64) bool {
|
||||
// ommit unsafe
|
||||
// return (((*(*uint8)(unsafe.Pointer(uintptr(unsafe.Pointer(&bl.bitset[idx>>6])) + uintptr((idx%64)>>3)))) >> (idx % 8)) & 1) == 1
|
||||
return bl.bitset[idx>>6]&(1<<(idx%64)) != 0
|
||||
}
|
||||
|
||||
// JSONMarshal
|
||||
// returns JSON-object (type bloomJSONImExport) as []byte
|
||||
func (bl Bloom) JSONMarshal() []byte {
|
||||
bloomImEx := bloomJSONImExport{}
|
||||
bloomImEx.SetLocs = uint64(bl.setLocs)
|
||||
bloomImEx.FilterSet = make([]byte, len(bl.bitset)<<3)
|
||||
for i := range bloomImEx.FilterSet {
|
||||
bloomImEx.FilterSet[i] = *(*byte)(unsafe.Pointer(uintptr(unsafe.Pointer(&bl.bitset[0])) + uintptr(i)))
|
||||
}
|
||||
data, err := json.Marshal(bloomImEx)
|
||||
if err != nil {
|
||||
log.Fatal("json.Marshal failed: ", err)
|
||||
}
|
||||
return data
|
||||
}
|
||||
|
||||
// // alternative hashFn
|
||||
// func (bl Bloom) fnv64a(b *[]byte) (l, h uint64) {
|
||||
// h64 := fnv.New64a()
|
||||
// h64.Write(*b)
|
||||
// hash := h64.Sum64()
|
||||
// h = hash >> 32
|
||||
// l = hash << 32 >> 32
|
||||
// return l, h
|
||||
// }
|
||||
//
|
||||
// // <-- http://partow.net/programming/hashfunctions/index.html
|
||||
// // citation: An algorithm proposed by Donald E. Knuth in The Art Of Computer Programming Volume 3,
|
||||
// // under the topic of sorting and search chapter 6.4.
|
||||
// // modified to fit with boolset-length
|
||||
// func (bl Bloom) DEKHash(b *[]byte) (l, h uint64) {
|
||||
// hash := uint64(len(*b))
|
||||
// for _, c := range *b {
|
||||
// hash = ((hash << 5) ^ (hash >> bl.shift)) ^ uint64(c)
|
||||
// }
|
||||
// h = hash >> bl.shift
|
||||
// l = hash << bl.sizeExp >> bl.sizeExp
|
||||
// return l, h
|
||||
// }
|
||||
225 vendor/github.com/AndreasBriese/bbloom/sipHash.go generated vendored
@@ -1,225 +0,0 @@
|
||||
// Written in 2012 by Dmitry Chestnykh.
|
||||
//
|
||||
// To the extent possible under law, the author have dedicated all copyright
|
||||
// and related and neighboring rights to this software to the public domain
|
||||
// worldwide. This software is distributed without any warranty.
|
||||
// http://creativecommons.org/publicdomain/zero/1.0/
|
||||
//
|
||||
// Package siphash implements SipHash-2-4, a fast short-input PRF
|
||||
// created by Jean-Philippe Aumasson and Daniel J. Bernstein.
|
||||
|
||||
package bbloom
|
||||
|
||||
// Hash returns the 64-bit SipHash-2-4 of the given byte slice with two 64-bit
|
||||
// parts of 128-bit key: k0 and k1.
|
||||
func (bl Bloom) sipHash(p []byte) (l, h uint64) {
|
||||
// Initialization.
|
||||
v0 := uint64(8317987320269560794) // k0 ^ 0x736f6d6570736575
|
||||
v1 := uint64(7237128889637516672) // k1 ^ 0x646f72616e646f6d
|
||||
v2 := uint64(7816392314733513934) // k0 ^ 0x6c7967656e657261
|
||||
v3 := uint64(8387220255325274014) // k1 ^ 0x7465646279746573
|
||||
t := uint64(len(p)) << 56
|
||||
|
||||
// Compression.
|
||||
for len(p) >= 8 {
|
||||
|
||||
m := uint64(p[0]) | uint64(p[1])<<8 | uint64(p[2])<<16 | uint64(p[3])<<24 |
|
||||
uint64(p[4])<<32 | uint64(p[5])<<40 | uint64(p[6])<<48 | uint64(p[7])<<56
|
||||
|
||||
v3 ^= m
|
||||
|
||||
// Round 1.
|
||||
v0 += v1
|
||||
v1 = v1<<13 | v1>>51
|
||||
v1 ^= v0
|
||||
v0 = v0<<32 | v0>>32
|
||||
|
||||
v2 += v3
|
||||
v3 = v3<<16 | v3>>48
|
||||
v3 ^= v2
|
||||
|
||||
v0 += v3
|
||||
v3 = v3<<21 | v3>>43
|
||||
v3 ^= v0
|
||||
|
||||
v2 += v1
|
||||
v1 = v1<<17 | v1>>47
|
||||
v1 ^= v2
|
||||
v2 = v2<<32 | v2>>32
|
||||
|
||||
// Round 2.
|
||||
v0 += v1
|
||||
v1 = v1<<13 | v1>>51
|
||||
v1 ^= v0
|
||||
v0 = v0<<32 | v0>>32
|
||||
|
||||
v2 += v3
|
||||
v3 = v3<<16 | v3>>48
|
||||
v3 ^= v2
|
||||
|
||||
v0 += v3
|
||||
v3 = v3<<21 | v3>>43
|
||||
v3 ^= v0
|
||||
|
||||
v2 += v1
|
||||
v1 = v1<<17 | v1>>47
|
||||
v1 ^= v2
|
||||
v2 = v2<<32 | v2>>32
|
||||
|
||||
v0 ^= m
|
||||
p = p[8:]
|
||||
}
|
||||
|
||||
// Compress last block.
|
||||
switch len(p) {
|
||||
case 7:
|
||||
t |= uint64(p[6]) << 48
|
||||
fallthrough
|
||||
case 6:
|
||||
t |= uint64(p[5]) << 40
|
||||
fallthrough
|
||||
case 5:
|
||||
t |= uint64(p[4]) << 32
|
||||
fallthrough
|
||||
case 4:
|
||||
t |= uint64(p[3]) << 24
|
||||
fallthrough
|
||||
case 3:
|
||||
t |= uint64(p[2]) << 16
|
||||
fallthrough
|
||||
case 2:
|
||||
t |= uint64(p[1]) << 8
|
||||
fallthrough
|
||||
case 1:
|
||||
t |= uint64(p[0])
|
||||
}
|
||||
|
||||
v3 ^= t
|
||||
|
||||
// Round 1.
|
||||
v0 += v1
|
||||
v1 = v1<<13 | v1>>51
|
||||
v1 ^= v0
|
||||
v0 = v0<<32 | v0>>32
|
||||
|
||||
v2 += v3
|
||||
v3 = v3<<16 | v3>>48
|
||||
v3 ^= v2
|
||||
|
||||
v0 += v3
|
||||
v3 = v3<<21 | v3>>43
|
||||
v3 ^= v0
|
||||
|
||||
v2 += v1
|
||||
v1 = v1<<17 | v1>>47
|
||||
v1 ^= v2
|
||||
v2 = v2<<32 | v2>>32
|
||||
|
||||
// Round 2.
|
||||
v0 += v1
|
||||
v1 = v1<<13 | v1>>51
|
||||
v1 ^= v0
|
||||
v0 = v0<<32 | v0>>32
|
||||
|
||||
v2 += v3
|
||||
v3 = v3<<16 | v3>>48
|
||||
v3 ^= v2
|
||||
|
||||
v0 += v3
|
||||
v3 = v3<<21 | v3>>43
|
||||
v3 ^= v0
|
||||
|
||||
v2 += v1
|
||||
v1 = v1<<17 | v1>>47
|
||||
v1 ^= v2
|
||||
v2 = v2<<32 | v2>>32
|
||||
|
||||
v0 ^= t
|
||||
|
||||
// Finalization.
|
||||
v2 ^= 0xff
|
||||
|
||||
// Round 1.
|
||||
v0 += v1
|
||||
v1 = v1<<13 | v1>>51
|
||||
v1 ^= v0
|
||||
v0 = v0<<32 | v0>>32
|
||||
|
||||
v2 += v3
|
||||
v3 = v3<<16 | v3>>48
|
||||
v3 ^= v2
|
||||
|
||||
v0 += v3
|
||||
v3 = v3<<21 | v3>>43
|
||||
v3 ^= v0
|
||||
|
||||
v2 += v1
|
||||
v1 = v1<<17 | v1>>47
|
||||
v1 ^= v2
|
||||
v2 = v2<<32 | v2>>32
|
||||
|
||||
// Round 2.
|
||||
v0 += v1
|
||||
v1 = v1<<13 | v1>>51
|
||||
v1 ^= v0
|
||||
v0 = v0<<32 | v0>>32
|
||||
|
||||
v2 += v3
|
||||
v3 = v3<<16 | v3>>48
|
||||
v3 ^= v2
|
||||
|
||||
v0 += v3
|
||||
v3 = v3<<21 | v3>>43
|
||||
v3 ^= v0
|
||||
|
||||
v2 += v1
|
||||
v1 = v1<<17 | v1>>47
|
||||
v1 ^= v2
|
||||
v2 = v2<<32 | v2>>32
|
||||
|
||||
// Round 3.
|
||||
v0 += v1
|
||||
v1 = v1<<13 | v1>>51
|
||||
v1 ^= v0
|
||||
v0 = v0<<32 | v0>>32
|
||||
|
||||
v2 += v3
|
||||
v3 = v3<<16 | v3>>48
|
||||
v3 ^= v2
|
||||
|
||||
v0 += v3
|
||||
v3 = v3<<21 | v3>>43
|
||||
v3 ^= v0
|
||||
|
||||
v2 += v1
|
||||
v1 = v1<<17 | v1>>47
|
||||
v1 ^= v2
|
||||
v2 = v2<<32 | v2>>32
|
||||
|
||||
// Round 4.
|
||||
v0 += v1
|
||||
v1 = v1<<13 | v1>>51
|
||||
v1 ^= v0
|
||||
v0 = v0<<32 | v0>>32
|
||||
|
||||
v2 += v3
|
||||
v3 = v3<<16 | v3>>48
|
||||
v3 ^= v2
|
||||
|
||||
v0 += v3
|
||||
v3 = v3<<21 | v3>>43
|
||||
v3 ^= v0
|
||||
|
||||
v2 += v1
|
||||
v1 = v1<<17 | v1>>47
|
||||
v1 ^= v2
|
||||
v2 = v2<<32 | v2>>32
|
||||
|
||||
// return v0 ^ v1 ^ v2 ^ v3
|
||||
|
||||
hash := v0 ^ v1 ^ v2 ^ v3
|
||||
h = hash >> bl.shift
|
||||
l = hash << bl.shift >> bl.shift
|
||||
return l, h
|
||||
|
||||
}
|
||||
140 vendor/github.com/AndreasBriese/bbloom/words.txt generated vendored
@@ -1,140 +0,0 @@
|
||||
2014/01/01 00:00:00 /info.html
|
||||
2014/01/01 00:00:00 /info.html
|
||||
2014/01/01 00:00:01 /info.html
|
||||
2014/01/01 00:00:02 /info.html
|
||||
2014/01/01 00:00:03 /info.html
|
||||
2014/01/01 00:00:04 /info.html
|
||||
2014/01/01 00:00:05 /info.html
|
||||
2014/01/01 00:00:06 /info.html
|
||||
2014/01/01 00:00:07 /info.html
|
||||
2014/01/01 00:00:08 /info.html
|
||||
2014/01/01 00:00:09 /info.html
|
||||
2014/01/01 00:00:10 /info.html
|
||||
2014/01/01 00:00:11 /info.html
|
||||
2014/01/01 00:00:12 /info.html
|
||||
2014/01/01 00:00:13 /info.html
|
||||
2014/01/01 00:00:14 /info.html
|
||||
2014/01/01 00:00:15 /info.html
|
||||
2014/01/01 00:00:16 /info.html
|
||||
2014/01/01 00:00:17 /info.html
|
||||
2014/01/01 00:00:18 /info.html
|
||||
2014/01/01 00:00:19 /info.html
|
||||
2014/01/01 00:00:20 /info.html
|
||||
2014/01/01 00:00:21 /info.html
|
||||
2014/01/01 00:00:22 /info.html
|
||||
2014/01/01 00:00:23 /info.html
|
||||
2014/01/01 00:00:24 /info.html
|
||||
2014/01/01 00:00:25 /info.html
|
||||
2014/01/01 00:00:26 /info.html
|
||||
2014/01/01 00:00:27 /info.html
|
||||
2014/01/01 00:00:28 /info.html
|
||||
2014/01/01 00:00:29 /info.html
|
||||
2014/01/01 00:00:30 /info.html
|
||||
2014/01/01 00:00:31 /info.html
|
||||
2014/01/01 00:00:32 /info.html
|
||||
2014/01/01 00:00:33 /info.html
|
||||
2014/01/01 00:00:34 /info.html
|
||||
2014/01/01 00:00:35 /info.html
|
||||
2014/01/01 00:00:36 /info.html
|
||||
2014/01/01 00:00:37 /info.html
|
||||
2014/01/01 00:00:38 /info.html
|
||||
2014/01/01 00:00:39 /info.html
|
||||
2014/01/01 00:00:40 /info.html
|
||||
2014/01/01 00:00:41 /info.html
|
||||
2014/01/01 00:00:42 /info.html
|
||||
2014/01/01 00:00:43 /info.html
|
||||
2014/01/01 00:00:44 /info.html
|
||||
2014/01/01 00:00:45 /info.html
|
||||
2014/01/01 00:00:46 /info.html
|
||||
2014/01/01 00:00:47 /info.html
|
||||
2014/01/01 00:00:48 /info.html
|
||||
2014/01/01 00:00:49 /info.html
|
||||
2014/01/01 00:00:50 /info.html
|
||||
2014/01/01 00:00:51 /info.html
|
||||
2014/01/01 00:00:52 /info.html
|
||||
2014/01/01 00:00:53 /info.html
|
||||
2014/01/01 00:00:54 /info.html
|
||||
2014/01/01 00:00:55 /info.html
|
||||
2014/01/01 00:00:56 /info.html
|
||||
2014/01/01 00:00:57 /info.html
|
||||
2014/01/01 00:00:58 /info.html
|
||||
2014/01/01 00:00:59 /info.html
|
||||
2014/01/01 00:01:00 /info.html
|
||||
2014/01/01 00:01:01 /info.html
|
||||
2014/01/01 00:01:02 /info.html
|
||||
2014/01/01 00:01:03 /info.html
|
||||
2014/01/01 00:01:04 /info.html
|
||||
2014/01/01 00:01:05 /info.html
|
||||
2014/01/01 00:01:06 /info.html
|
||||
2014/01/01 00:01:07 /info.html
|
||||
2014/01/01 00:01:08 /info.html
|
||||
2014/01/01 00:01:09 /info.html
|
||||
2014/01/01 00:01:10 /info.html
|
||||
2014/01/01 00:01:11 /info.html
|
||||
2014/01/01 00:01:12 /info.html
|
||||
2014/01/01 00:01:13 /info.html
|
||||
2014/01/01 00:01:14 /info.html
|
||||
2014/01/01 00:01:15 /info.html
|
||||
2014/01/01 00:01:16 /info.html
|
||||
2014/01/01 00:01:17 /info.html
|
||||
2014/01/01 00:01:18 /info.html
|
||||
2014/01/01 00:01:19 /info.html
|
||||
2014/01/01 00:01:20 /info.html
|
||||
2014/01/01 00:01:21 /info.html
|
||||
2014/01/01 00:01:22 /info.html
|
||||
2014/01/01 00:01:23 /info.html
|
||||
2014/01/01 00:01:24 /info.html
|
||||
2014/01/01 00:01:25 /info.html
|
||||
2014/01/01 00:01:26 /info.html
|
||||
2014/01/01 00:01:27 /info.html
|
||||
2014/01/01 00:01:28 /info.html
|
||||
2014/01/01 00:01:29 /info.html
|
||||
2014/01/01 00:01:30 /info.html
|
||||
2014/01/01 00:01:31 /info.html
|
||||
2014/01/01 00:01:32 /info.html
|
||||
2014/01/01 00:01:33 /info.html
|
||||
2014/01/01 00:01:34 /info.html
|
||||
2014/01/01 00:01:35 /info.html
|
||||
2014/01/01 00:01:36 /info.html
|
||||
2014/01/01 00:01:37 /info.html
|
||||
2014/01/01 00:01:38 /info.html
|
||||
2014/01/01 00:01:39 /info.html
|
||||
2014/01/01 00:01:40 /info.html
|
||||
2014/01/01 00:01:41 /info.html
|
||||
2014/01/01 00:01:42 /info.html
|
||||
2014/01/01 00:01:43 /info.html
|
||||
2014/01/01 00:01:44 /info.html
|
||||
2014/01/01 00:01:45 /info.html
|
||||
2014/01/01 00:01:46 /info.html
|
||||
2014/01/01 00:01:47 /info.html
|
||||
2014/01/01 00:01:48 /info.html
|
||||
2014/01/01 00:01:49 /info.html
|
||||
2014/01/01 00:01:50 /info.html
|
||||
2014/01/01 00:01:51 /info.html
|
||||
2014/01/01 00:01:52 /info.html
|
||||
2014/01/01 00:01:53 /info.html
|
||||
2014/01/01 00:01:54 /info.html
|
||||
2014/01/01 00:01:55 /info.html
|
||||
2014/01/01 00:01:56 /info.html
|
||||
2014/01/01 00:01:57 /info.html
|
||||
2014/01/01 00:01:58 /info.html
|
||||
2014/01/01 00:01:59 /info.html
|
||||
2014/01/01 00:02:00 /info.html
|
||||
2014/01/01 00:02:01 /info.html
|
||||
2014/01/01 00:02:02 /info.html
|
||||
2014/01/01 00:02:03 /info.html
|
||||
2014/01/01 00:02:04 /info.html
|
||||
2014/01/01 00:02:05 /info.html
|
||||
2014/01/01 00:02:06 /info.html
|
||||
2014/01/01 00:02:07 /info.html
|
||||
2014/01/01 00:02:08 /info.html
|
||||
2014/01/01 00:02:09 /info.html
|
||||
2014/01/01 00:02:10 /info.html
|
||||
2014/01/01 00:02:11 /info.html
|
||||
2014/01/01 00:02:12 /info.html
|
||||
2014/01/01 00:02:13 /info.html
|
||||
2014/01/01 00:02:14 /info.html
|
||||
2014/01/01 00:02:15 /info.html
|
||||
2014/01/01 00:02:16 /info.html
|
||||
2014/01/01 00:02:17 /info.html
|
||||
2014/01/01 00:02:18 /info.html
|
||||
24 vendor/github.com/alicebob/gopher-json/LICENSE generated vendored
@@ -1,24 +0,0 @@
|
||||
This is free and unencumbered software released into the public domain.
|
||||
|
||||
Anyone is free to copy, modify, publish, use, compile, sell, or
|
||||
distribute this software, either in source code form or as a compiled
|
||||
binary, for any purpose, commercial or non-commercial, and by any
|
||||
means.
|
||||
|
||||
In jurisdictions that recognize copyright laws, the author or authors
|
||||
of this software dedicate any and all copyright interest in the
|
||||
software to the public domain. We make this dedication for the benefit
|
||||
of the public at large and to the detriment of our heirs and
|
||||
successors. We intend this dedication to be an overt act of
|
||||
relinquishment in perpetuity of all present and future rights to this
|
||||
software under copyright law.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
||||
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
|
||||
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
|
||||
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
|
||||
OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
For more information, please refer to <http://unlicense.org/>
|
||||
7 vendor/github.com/alicebob/gopher-json/README.md generated vendored
@@ -1,7 +0,0 @@
|
||||
# gopher-json [](https://godoc.org/layeh.com/gopher-json)
|
||||
|
||||
Package json is a simple JSON encoder/decoder for [gopher-lua](https://github.com/yuin/gopher-lua).
|
||||
|
||||
## License
|
||||
|
||||
Public domain
|
||||
33 vendor/github.com/alicebob/gopher-json/doc.go generated vendored
@@ -1,33 +0,0 @@
|
||||
// Package json is a simple JSON encoder/decoder for gopher-lua.
|
||||
//
|
||||
// Documentation
|
||||
//
|
||||
// The following functions are exposed by the library:
|
||||
// decode(string): Decodes a JSON string. Returns nil and an error string if
|
||||
// the string could not be decoded.
|
||||
// encode(value): Encodes a value into a JSON string. Returns nil and an error
|
||||
// string if the value could not be encoded.
|
||||
//
|
||||
// The following types are supported:
|
||||
//
|
||||
// Lua | JSON
|
||||
// ---------+-----
|
||||
// nil | null
|
||||
// number | number
|
||||
// string | string
|
||||
// table | object: when table is non-empty and has only string keys
|
||||
// | array: when table is empty, or has only sequential numeric keys
|
||||
// | starting from 1
|
||||
//
|
||||
// Attempting to encode any other Lua type will result in an error.
|
||||
//
|
||||
// Example
|
||||
//
|
||||
// Below is an example usage of the library:
|
||||
// import (
|
||||
// luajson "layeh.com/gopher-json"
|
||||
// )
|
||||
//
|
||||
// L := lua.NewState()
|
||||
// luajson.Preload(s)
|
||||
package json
|
||||
189 vendor/github.com/alicebob/gopher-json/json.go generated vendored
@@ -1,189 +0,0 @@
|
||||
package json
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
|
||||
"github.com/yuin/gopher-lua"
|
||||
)
|
||||
|
||||
// Preload adds json to the given Lua state's package.preload table. After it
|
||||
// has been preloaded, it can be loaded using require:
|
||||
//
|
||||
// local json = require("json")
|
||||
func Preload(L *lua.LState) {
|
||||
L.PreloadModule("json", Loader)
|
||||
}
|
||||
|
||||
// Loader is the module loader function.
|
||||
func Loader(L *lua.LState) int {
|
||||
t := L.NewTable()
|
||||
L.SetFuncs(t, api)
|
||||
L.Push(t)
|
||||
return 1
|
||||
}
|
||||
|
||||
var api = map[string]lua.LGFunction{
|
||||
"decode": apiDecode,
|
||||
"encode": apiEncode,
|
||||
}
|
||||
|
||||
func apiDecode(L *lua.LState) int {
|
||||
if L.GetTop() != 1 {
|
||||
L.Error(lua.LString("bad argument #1 to decode"), 1)
|
||||
return 0
|
||||
}
|
||||
str := L.CheckString(1)
|
||||
|
||||
value, err := Decode(L, []byte(str))
|
||||
if err != nil {
|
||||
L.Push(lua.LNil)
|
||||
L.Push(lua.LString(err.Error()))
|
||||
return 2
|
||||
}
|
||||
L.Push(value)
|
||||
return 1
|
||||
}
|
||||
|
||||
func apiEncode(L *lua.LState) int {
|
||||
if L.GetTop() != 1 {
|
||||
L.Error(lua.LString("bad argument #1 to encode"), 1)
|
||||
return 0
|
||||
}
|
||||
value := L.CheckAny(1)
|
||||
|
||||
data, err := Encode(value)
|
||||
if err != nil {
|
||||
L.Push(lua.LNil)
|
||||
L.Push(lua.LString(err.Error()))
|
||||
return 2
|
||||
}
|
||||
L.Push(lua.LString(string(data)))
|
||||
return 1
|
||||
}
|
||||
|
||||
var (
|
||||
errNested = errors.New("cannot encode recursively nested tables to JSON")
|
||||
errSparseArray = errors.New("cannot encode sparse array")
|
||||
errInvalidKeys = errors.New("cannot encode mixed or invalid key types")
|
||||
)
|
||||
|
||||
type invalidTypeError lua.LValueType
|
||||
|
||||
func (i invalidTypeError) Error() string {
|
||||
return `cannot encode ` + lua.LValueType(i).String() + ` to JSON`
|
||||
}
|
||||
|
||||
// Encode returns the JSON encoding of value.
|
||||
func Encode(value lua.LValue) ([]byte, error) {
|
||||
return json.Marshal(jsonValue{
|
||||
LValue: value,
|
||||
visited: make(map[*lua.LTable]bool),
|
||||
})
|
||||
}
|
||||
|
||||
type jsonValue struct {
|
||||
lua.LValue
|
||||
visited map[*lua.LTable]bool
|
||||
}
|
||||
|
||||
func (j jsonValue) MarshalJSON() (data []byte, err error) {
|
||||
switch converted := j.LValue.(type) {
|
||||
case lua.LBool:
|
||||
data, err = json.Marshal(bool(converted))
|
||||
case lua.LNumber:
|
||||
data, err = json.Marshal(float64(converted))
|
||||
case *lua.LNilType:
|
||||
data = []byte(`null`)
|
||||
case lua.LString:
|
||||
data, err = json.Marshal(string(converted))
|
||||
case *lua.LTable:
|
||||
if j.visited[converted] {
|
||||
return nil, errNested
|
||||
}
|
||||
j.visited[converted] = true
|
||||
|
||||
key, value := converted.Next(lua.LNil)
|
||||
|
||||
switch key.Type() {
|
||||
case lua.LTNil: // empty table
|
||||
data = []byte(`[]`)
|
||||
case lua.LTNumber:
|
||||
arr := make([]jsonValue, 0, converted.Len())
|
||||
expectedKey := lua.LNumber(1)
|
||||
for key != lua.LNil {
|
||||
if key.Type() != lua.LTNumber {
|
||||
err = errInvalidKeys
|
||||
return
|
||||
}
|
||||
if expectedKey != key {
|
||||
err = errSparseArray
|
||||
return
|
||||
}
|
||||
arr = append(arr, jsonValue{value, j.visited})
|
||||
expectedKey++
|
||||
key, value = converted.Next(key)
|
||||
}
|
||||
data, err = json.Marshal(arr)
|
||||
case lua.LTString:
|
||||
obj := make(map[string]jsonValue)
|
||||
for key != lua.LNil {
|
||||
if key.Type() != lua.LTString {
|
||||
err = errInvalidKeys
|
||||
return
|
||||
}
|
||||
obj[key.String()] = jsonValue{value, j.visited}
|
||||
key, value = converted.Next(key)
|
||||
}
|
||||
data, err = json.Marshal(obj)
|
||||
default:
|
||||
err = errInvalidKeys
|
||||
}
|
||||
default:
|
||||
err = invalidTypeError(j.LValue.Type())
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Decode converts the JSON encoded data to Lua values.
|
||||
func Decode(L *lua.LState, data []byte) (lua.LValue, error) {
|
||||
var value interface{}
|
||||
err := json.Unmarshal(data, &value)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return DecodeValue(L, value), nil
|
||||
}
|
||||
|
||||
// DecodeValue converts the value to a Lua value.
|
||||
//
|
||||
// This function only converts values that the encoding/json package decodes to.
|
||||
// All other values will return lua.LNil.
|
||||
func DecodeValue(L *lua.LState, value interface{}) lua.LValue {
|
||||
switch converted := value.(type) {
|
||||
case bool:
|
||||
return lua.LBool(converted)
|
||||
case float64:
|
||||
return lua.LNumber(converted)
|
||||
case string:
|
||||
return lua.LString(converted)
|
||||
case json.Number:
|
||||
return lua.LString(converted)
|
||||
case []interface{}:
|
||||
arr := L.CreateTable(len(converted), 0)
|
||||
for _, item := range converted {
|
||||
arr.Append(DecodeValue(L, item))
|
||||
}
|
||||
return arr
|
||||
case map[string]interface{}:
|
||||
tbl := L.CreateTable(0, len(converted))
|
||||
for key, item := range converted {
|
||||
tbl.RawSetH(lua.LString(key), DecodeValue(L, item))
|
||||
}
|
||||
return tbl
|
||||
case nil:
|
||||
return lua.LNil
|
||||
}
|
||||
|
||||
return lua.LNil
|
||||
}
|
||||
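The same package also exports Encode and Decode for direct use from Go. Below is a small sketch built only on the signatures shown in the file above; the import path again follows the doc comment, and the output comments reflect encoding/json's behaviour.

```go
package main

import (
	"fmt"

	lua "github.com/yuin/gopher-lua"
	luajson "layeh.com/gopher-json"
)

func main() {
	L := lua.NewState()
	defer L.Close()

	// Go value -> JSON bytes via Encode(value lua.LValue).
	data, err := luajson.Encode(lua.LString("hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // "hello"

	// JSON bytes -> Lua value via Decode(L, data); objects become *lua.LTable.
	val, err := luajson.Decode(L, []byte(`{"answer": 42}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(val.Type()) // table
}
```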
6  vendor/github.com/alicebob/miniredis/v2/.gitignore  generated  vendored
@@ -1,6 +0,0 @@
/integration/redis_src/
/integration/dump.rdb
*.swp
/integration/nodes.conf
.idea/
miniredis.iml
225  vendor/github.com/alicebob/miniredis/v2/CHANGELOG.md  generated  vendored
@@ -1,225 +0,0 @@
|
||||
## Changelog
|
||||
|
||||
|
||||
### v2.23.0
|
||||
|
||||
- basic INFO support (thanks @kirill-a-belov)
|
||||
- support COUNT in SSCAN (thanks @Abdi-dd)
|
||||
- test and support Go 1.19
|
||||
- support LPOS (thanks @ianstarz)
|
||||
- support XPENDING, XGROUP {CREATECONSUMER,DESTROY,DELCONSUMER}, XINFO {CONSUMERS,GROUPS}, XCLAIM (thanks @sandyharvie)
|
||||
|
||||
|
||||
### v2.22.0
|
||||
|
||||
- set miniredis.DumpMaxLineLen to get more Dump() info (thanks @afjoseph)
|
||||
- fix invalid resposne of COMMAND (thanks @zsh1995)
|
||||
- fix possibility to generate duplicate IDs in XADD (thanks @readams)
|
||||
- adds support for XAUTOCLAIM min-idle parameter (thanks @readams)
|
||||
|
||||
|
||||
### v2.21.0
|
||||
|
||||
- support for GETEX (thanks @dntj)
|
||||
- support for GT and LT in ZADD (thanks @lsgndln)
|
||||
- support for XAUTOCLAIM (thanks @randall-fulton)
|
||||
|
||||
|
||||
### v2.20.0
|
||||
|
||||
- back to support Go >= 1.14 (thanks @ajatprabha and @marcind)
|
||||
|
||||
|
||||
### v2.19.0
|
||||
|
||||
- support for TYPE in SCAN (thanks @0xDiddi)
|
||||
- update BITPOS (thanks @dirkm)
|
||||
- fix a lua redis.call() return value (thanks @mpetronic)
|
||||
- update ZRANGE (thanks @valdemarpereira)
|
||||
|
||||
|
||||
### v2.18.0
|
||||
|
||||
- support for ZUNION (thanks @propan)
|
||||
- support for COPY (thanks @matiasinsaurralde and @rockitbaby)
|
||||
- support for LMOVE (thanks @btwear)
|
||||
|
||||
|
||||
### v2.17.0
|
||||
|
||||
- added miniredis.RunT(t)
|
||||
|
||||
|
||||
### v2.16.1
|
||||
|
||||
- fix ZINTERSTORE with wets (thanks @lingjl2010 and @okhowang)
|
||||
- fix exclusive ranges in XRANGE (thanks @joseotoro)
|
||||
|
||||
|
||||
### v2.16.0
|
||||
|
||||
- simplify some code (thanks @zonque)
|
||||
- support for EXAT/PXAT in SET
|
||||
- support for XTRIM (thanks @joseotoro)
|
||||
- support for ZRANDMEMBER
|
||||
- support for redis.log() in lua (thanks @dirkm)
|
||||
|
||||
|
||||
### v2.15.2
|
||||
|
||||
- Fix race condition in blocking code (thanks @zonque and @robx)
|
||||
- XREAD accepts '$' as ID (thanks @bradengroom)
|
||||
|
||||
|
||||
### v2.15.1
|
||||
|
||||
- EVAL should cache the script (thanks @guoshimin)
|
||||
|
||||
|
||||
### v2.15.0
|
||||
|
||||
- target redis 6.2 and added new args to various commands
|
||||
- support for all hyperlog commands (thanks @ilbaktin)
|
||||
- support for GETDEL (thanks @wszaranski)
|
||||
|
||||
|
||||
### v2.14.5
|
||||
|
||||
- added XPENDING
|
||||
- support for BLOCK option in XREAD and XREADGROUP
|
||||
|
||||
|
||||
### v2.14.4
|
||||
|
||||
- fix BITPOS error (thanks @xiaoyuzdy)
|
||||
- small fixes for XREAD, XACK, and XDEL. Mostly error cases.
|
||||
- fix empty EXEC return type (thanks @ashanbrown)
|
||||
- fix XDEL (thanks @svakili and @yvesf)
|
||||
- fix FLUSHALL for streams (thanks @svakili)
|
||||
|
||||
|
||||
### v2.14.3
|
||||
|
||||
- fix problem where Lua code didn't set the selected DB
|
||||
- update to redis 6.0.10 (thanks @lazappa)
|
||||
|
||||
|
||||
### v2.14.2
|
||||
|
||||
- update LUA dependency
|
||||
- deal with (p)unsubscribe when there are no channels
|
||||
|
||||
|
||||
### v2.14.1
|
||||
|
||||
- mod tidy
|
||||
|
||||
|
||||
### v2.14.0
|
||||
|
||||
- support for HELLO and the RESP3 protocol
|
||||
- KEEPTTL in SET (thanks @johnpena)
|
||||
|
||||
|
||||
### v2.13.3
|
||||
|
||||
- support Go 1.14 and 1.15
|
||||
- update the `Check...()` methods
|
||||
- support for XREAD (thanks @pieterlexis)
|
||||
|
||||
|
||||
### v2.13.2
|
||||
|
||||
- Use SAN instead of CN in self signed cert for testing (thanks @johejo)
|
||||
- Travis CI now tests against the most recent two versions of Go (thanks @johejo)
|
||||
- changed unit and integration tests to compare raw payloads, not parsed payloads
|
||||
- remove "redigo" dependency
|
||||
|
||||
|
||||
### v2.13.1
|
||||
|
||||
- added HSTRLEN
|
||||
- minimal support for ACL users in AUTH
|
||||
|
||||
|
||||
### v2.13.0
|
||||
|
||||
- added RunTLS(...)
|
||||
- added SetError(...)
|
||||
|
||||
|
||||
### v2.12.0
|
||||
|
||||
- redis 6
|
||||
- Lua json update (thanks @gsmith85)
|
||||
- CLUSTER commands (thanks @kratisto)
|
||||
- fix TOUCH
|
||||
- fix a shutdown race condition
|
||||
|
||||
|
||||
### v2.11.4
|
||||
|
||||
- ZUNIONSTORE now supports standard set types (thanks @wshirey)
|
||||
|
||||
|
||||
### v2.11.3
|
||||
|
||||
- support for TOUCH (thanks @cleroux)
|
||||
- support for cluster and stream commands (thanks @kak-tus)
|
||||
|
||||
|
||||
### v2.11.2
|
||||
|
||||
- make sure Lua code is executed concurrently
|
||||
- add command GEORADIUSBYMEMBER (thanks @kyeett)
|
||||
|
||||
|
||||
### v2.11.1
|
||||
|
||||
- globals protection for Lua code (thanks @vk-outreach)
|
||||
- HSET update (thanks @carlgreen)
|
||||
- fix BLPOP block on shutdown (thanks @Asalle)
|
||||
|
||||
|
||||
### v2.11.0
|
||||
|
||||
- added XRANGE/XREVRANGE, XADD, and XLEN (thanks @skateinmars)
|
||||
- added GEODIST
|
||||
- improved precision for geohashes, closer to what real redis does
|
||||
- use 128bit floats internally for INCRBYFLOAT and related (thanks @timnd)
|
||||
|
||||
|
||||
### v2.10.1
|
||||
|
||||
- added m.Server()
|
||||
|
||||
|
||||
### v2.10.0
|
||||
|
||||
- added UNLINK
|
||||
- fix DEL zero-argument case
|
||||
- cleanup some direct access commands
|
||||
- added GEOADD, GEOPOS, GEORADIUS, and GEORADIUS_RO
|
||||
|
||||
|
||||
### v2.9.1
|
||||
|
||||
- fix issue with ZRANGEBYLEX
|
||||
- fix issue with BRPOPLPUSH and direct access
|
||||
|
||||
|
||||
### v2.9.0
|
||||
|
||||
- proper versioned import of github.com/gomodule/redigo (thanks @yfei1)
|
||||
- fix messages generated by PSUBSCRIBE
|
||||
- optional internal seed (thanks @zikaeroh)
|
||||
|
||||
|
||||
### v2.8.0
|
||||
|
||||
Proper `v2` in go.mod.
|
||||
|
||||
|
||||
### older
|
||||
|
||||
See https://github.com/alicebob/miniredis/releases for the full changelog
|
||||
21  vendor/github.com/alicebob/miniredis/v2/LICENSE  generated  vendored
@@ -1,21 +0,0 @@
The MIT License (MIT)

Copyright (c) 2014 Harmen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
12  vendor/github.com/alicebob/miniredis/v2/Makefile  generated  vendored
@@ -1,12 +0,0 @@
.PHONY: all test testrace int

all: test

test:
	go test ./...

testrace:
	go test -race ./...

int:
	${MAKE} -C integration all
333  vendor/github.com/alicebob/miniredis/v2/README.md  generated  vendored
@@ -1,333 +0,0 @@
|
||||
# Miniredis
|
||||
|
||||
Pure Go Redis test server, used in Go unittests.
|
||||
|
||||
|
||||
##
|
||||
|
||||
Sometimes you want to test code which uses Redis, without making it a full-blown
|
||||
integration test.
|
||||
Miniredis implements (parts of) the Redis server, to be used in unittests. It
|
||||
enables a simple, cheap, in-memory, Redis replacement, with a real TCP interface. Think of it as the Redis version of `net/http/httptest`.
|
||||
|
||||
It saves you from using mock code, and since the redis server lives in the
|
||||
test process you can query for values directly, without going through the server
|
||||
stack.
|
||||
|
||||
There are no dependencies on external binaries, so you can easily integrate it in automated build processes.
|
||||
|
||||
Be sure to import v2:
|
||||
```
|
||||
import "github.com/alicebob/miniredis/v2"
|
||||
```
|
||||
|
||||
## Commands
|
||||
|
||||
Implemented commands:
|
||||
|
||||
- Connection (complete)
|
||||
- AUTH -- see RequireAuth()
|
||||
- ECHO
|
||||
- HELLO -- see RequireUserAuth()
|
||||
- PING
|
||||
- SELECT
|
||||
- SWAPDB
|
||||
- QUIT
|
||||
- Key
|
||||
- COPY
|
||||
- DEL
|
||||
- EXISTS
|
||||
- EXPIRE
|
||||
- EXPIREAT
|
||||
- KEYS
|
||||
- MOVE
|
||||
- PERSIST
|
||||
- PEXPIRE
|
||||
- PEXPIREAT
|
||||
- PTTL
|
||||
- RENAME
|
||||
- RENAMENX
|
||||
- RANDOMKEY -- see m.Seed(...)
|
||||
- SCAN
|
||||
- TOUCH
|
||||
- TTL
|
||||
- TYPE
|
||||
- UNLINK
|
||||
- Transactions (complete)
|
||||
- DISCARD
|
||||
- EXEC
|
||||
- MULTI
|
||||
- UNWATCH
|
||||
- WATCH
|
||||
- Server
|
||||
- DBSIZE
|
||||
- FLUSHALL
|
||||
- FLUSHDB
|
||||
- TIME -- returns time.Now() or value set by SetTime()
|
||||
- COMMAND -- partly
|
||||
- INFO -- partly, returns only "clients" section with one field "connected_clients"
|
||||
- String keys (complete)
|
||||
- APPEND
|
||||
- BITCOUNT
|
||||
- BITOP
|
||||
- BITPOS
|
||||
- DECR
|
||||
- DECRBY
|
||||
- GET
|
||||
- GETBIT
|
||||
- GETRANGE
|
||||
- GETSET
|
||||
- GETDEL
|
||||
- GETEX
|
||||
- INCR
|
||||
- INCRBY
|
||||
- INCRBYFLOAT
|
||||
- MGET
|
||||
- MSET
|
||||
- MSETNX
|
||||
- PSETEX
|
||||
- SET
|
||||
- SETBIT
|
||||
- SETEX
|
||||
- SETNX
|
||||
- SETRANGE
|
||||
- STRLEN
|
||||
- Hash keys (complete)
|
||||
- HDEL
|
||||
- HEXISTS
|
||||
- HGET
|
||||
- HGETALL
|
||||
- HINCRBY
|
||||
- HINCRBYFLOAT
|
||||
- HKEYS
|
||||
- HLEN
|
||||
- HMGET
|
||||
- HMSET
|
||||
- HSET
|
||||
- HSETNX
|
||||
- HSTRLEN
|
||||
- HVALS
|
||||
- HSCAN
|
||||
- List keys (complete)
|
||||
- BLPOP
|
||||
- BRPOP
|
||||
- BRPOPLPUSH
|
||||
- LINDEX
|
||||
- LINSERT
|
||||
- LLEN
|
||||
- LPOP
|
||||
- LPUSH
|
||||
- LPUSHX
|
||||
- LRANGE
|
||||
- LREM
|
||||
- LSET
|
||||
- LTRIM
|
||||
- RPOP
|
||||
- RPOPLPUSH
|
||||
- RPUSH
|
||||
- RPUSHX
|
||||
- LMOVE
|
||||
- Pub/Sub (complete)
|
||||
- PSUBSCRIBE
|
||||
- PUBLISH
|
||||
- PUBSUB
|
||||
- PUNSUBSCRIBE
|
||||
- SUBSCRIBE
|
||||
- UNSUBSCRIBE
|
||||
- Set keys (complete)
|
||||
- SADD
|
||||
- SCARD
|
||||
- SDIFF
|
||||
- SDIFFSTORE
|
||||
- SINTER
|
||||
- SINTERSTORE
|
||||
- SISMEMBER
|
||||
- SMEMBERS
|
||||
- SMOVE
|
||||
- SPOP -- see m.Seed(...)
|
||||
- SRANDMEMBER -- see m.Seed(...)
|
||||
- SREM
|
||||
- SUNION
|
||||
- SUNIONSTORE
|
||||
- SSCAN
|
||||
- Sorted Set keys (complete)
|
||||
- ZADD
|
||||
- ZCARD
|
||||
- ZCOUNT
|
||||
- ZINCRBY
|
||||
- ZINTERSTORE
|
||||
- ZLEXCOUNT
|
||||
- ZPOPMIN
|
||||
- ZPOPMAX
|
||||
- ZRANDMEMBER
|
||||
- ZRANGE
|
||||
- ZRANGEBYLEX
|
||||
- ZRANGEBYSCORE
|
||||
- ZRANK
|
||||
- ZREM
|
||||
- ZREMRANGEBYLEX
|
||||
- ZREMRANGEBYRANK
|
||||
- ZREMRANGEBYSCORE
|
||||
- ZREVRANGE
|
||||
- ZREVRANGEBYLEX
|
||||
- ZREVRANGEBYSCORE
|
||||
- ZREVRANK
|
||||
- ZSCORE
|
||||
- ZUNION
|
||||
- ZUNIONSTORE
|
||||
- ZSCAN
|
||||
- Stream keys
|
||||
- XACK
|
||||
- XADD
|
||||
- XAUTOCLAIM
|
||||
- XCLAIM
|
||||
- XDEL
|
||||
- XGROUP CREATE
|
||||
- XGROUP CREATECONSUMER
|
||||
- XGROUP DESTROY
|
||||
- XGROUP DELCONSUMER
|
||||
- XINFO STREAM -- partly
|
||||
- XINFO GROUPS
|
||||
- XINFO CONSUMERS -- partly
|
||||
- XLEN
|
||||
- XRANGE
|
||||
- XREAD
|
||||
- XREADGROUP
|
||||
- XREVRANGE
|
||||
- XPENDING
|
||||
- XTRIM
|
||||
- Scripting
|
||||
- EVAL
|
||||
- EVALSHA
|
||||
- SCRIPT LOAD
|
||||
- SCRIPT EXISTS
|
||||
- SCRIPT FLUSH
|
||||
- GEO
|
||||
- GEOADD
|
||||
- GEODIST
|
||||
- ~~GEOHASH~~
|
||||
- GEOPOS
|
||||
- GEORADIUS
|
||||
- GEORADIUS_RO
|
||||
- GEORADIUSBYMEMBER
|
||||
- GEORADIUSBYMEMBER_RO
|
||||
- Cluster
|
||||
- CLUSTER SLOTS
|
||||
- CLUSTER KEYSLOT
|
||||
- CLUSTER NODES
|
||||
- HyperLogLog (complete)
|
||||
- PFADD
|
||||
- PFCOUNT
|
||||
- PFMERGE
|
||||
|
||||
|
||||
## TTLs, key expiration, and time
|
||||
|
||||
Since miniredis is intended to be used in unittests TTLs don't decrease
|
||||
automatically. You can use `TTL()` to get the TTL (as a time.Duration) of a
|
||||
key. It will return 0 when no TTL is set.
|
||||
|
||||
`m.FastForward(d)` can be used to decrement all TTLs. All TTLs which become <=
|
||||
0 will be removed.
|
||||
|
||||
EXPIREAT and PEXPIREAT values will be
|
||||
converted to a duration. For that you can either set m.SetTime(t) to use that
|
||||
time as the base for the (P)EXPIREAT conversion, or don't call SetTime(), in
|
||||
which case time.Now() will be used.
|
||||
|
||||
SetTime() also sets the value returned by TIME, which defaults to time.Now().
|
||||
It is not updated by FastForward, only by SetTime.
|
||||
|
||||
## Randomness and Seed()
|
||||
|
||||
Miniredis will use `math/rand`'s global RNG for randomness unless a seed is
|
||||
provided by calling `m.Seed(...)`. If a seed is provided, then miniredis will
|
||||
use its own RNG based on that seed.
|
||||
|
||||
Commands which use randomness are: RANDOMKEY, SPOP, and SRANDMEMBER.
|
||||
|
||||
## Example
|
||||
|
||||
``` Go
|
||||
|
||||
import (
|
||||
...
|
||||
"github.com/alicebob/miniredis/v2"
|
||||
...
|
||||
)
|
||||
|
||||
func TestSomething(t *testing.T) {
|
||||
s := miniredis.RunT(t)
|
||||
|
||||
// Optionally set some keys your code expects:
|
||||
s.Set("foo", "bar")
|
||||
s.HSet("some", "other", "key")
|
||||
|
||||
// Run your code and see if it behaves.
|
||||
// An example using the redigo library from "github.com/gomodule/redigo/redis":
|
||||
c, err := redis.Dial("tcp", s.Addr())
|
||||
_, err = c.Do("SET", "foo", "bar")
|
||||
|
||||
// Optionally check values in redis...
|
||||
if got, err := s.Get("foo"); err != nil || got != "bar" {
|
||||
t.Error("'foo' has the wrong value")
|
||||
}
|
||||
// ... or use a helper for that:
|
||||
s.CheckGet(t, "foo", "bar")
|
||||
|
||||
// TTL and expiration:
|
||||
s.Set("foo", "bar")
|
||||
s.SetTTL("foo", 10*time.Second)
|
||||
s.FastForward(11 * time.Second)
|
||||
if s.Exists("foo") {
|
||||
t.Fatal("'foo' should not have existed anymore")
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Not supported
|
||||
|
||||
Commands which will probably not be implemented:
|
||||
|
||||
- CLUSTER (all)
|
||||
- ~~CLUSTER *~~
|
||||
- ~~READONLY~~
|
||||
- ~~READWRITE~~
|
||||
- Key
|
||||
- ~~DUMP~~
|
||||
- ~~MIGRATE~~
|
||||
- ~~OBJECT~~
|
||||
- ~~RESTORE~~
|
||||
- ~~WAIT~~
|
||||
- Scripting
|
||||
- ~~SCRIPT DEBUG~~
|
||||
- ~~SCRIPT KILL~~
|
||||
- Server
|
||||
- ~~BGSAVE~~
|
||||
- ~~BGWRITEAOF~~
|
||||
- ~~CLIENT *~~
|
||||
- ~~CONFIG *~~
|
||||
- ~~DEBUG *~~
|
||||
- ~~LASTSAVE~~
|
||||
- ~~MONITOR~~
|
||||
- ~~ROLE~~
|
||||
- ~~SAVE~~
|
||||
- ~~SHUTDOWN~~
|
||||
- ~~SLAVEOF~~
|
||||
- ~~SLOWLOG~~
|
||||
- ~~SYNC~~
|
||||
|
||||
|
||||
## &c.
|
||||
|
||||
Integration tests are run against Redis 6.2.6. The [./integration](./integration/) subdir
|
||||
compares miniredis against a real redis instance.
|
||||
|
||||
The Redis 6 RESP3 protocol is supported. If there are problems, please open
|
||||
an issue.
|
||||
|
||||
If you want to test Redis Sentinel have a look at [minisentinel](https://github.com/Bose/minisentinel).
|
||||
|
||||
A changelog is kept at [CHANGELOG.md](https://github.com/alicebob/miniredis/blob/master/CHANGELOG.md).
|
||||
|
||||
[](https://pkg.go.dev/github.com/alicebob/miniredis/v2)
|
||||
63  vendor/github.com/alicebob/miniredis/v2/check.go  generated  vendored
@@ -1,63 +0,0 @@
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sort"
|
||||
)
|
||||
|
||||
// T is implemented by Testing.T
|
||||
type T interface {
|
||||
Helper()
|
||||
Errorf(string, ...interface{})
|
||||
}
|
||||
|
||||
// CheckGet does not call Errorf() iff there is a string key with the
|
||||
// expected value. Normal use case is `m.CheckGet(t, "username", "theking")`.
|
||||
func (m *Miniredis) CheckGet(t T, key, expected string) {
|
||||
t.Helper()
|
||||
|
||||
found, err := m.Get(key)
|
||||
if err != nil {
|
||||
t.Errorf("GET error, key %#v: %v", key, err)
|
||||
return
|
||||
}
|
||||
if found != expected {
|
||||
t.Errorf("GET error, key %#v: Expected %#v, got %#v", key, expected, found)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// CheckList does not call Errorf() iff there is a list key with the
|
||||
// expected values.
|
||||
// Normal use case is `m.CheckGet(t, "favorite_colors", "red", "green", "infrared")`.
|
||||
func (m *Miniredis) CheckList(t T, key string, expected ...string) {
|
||||
t.Helper()
|
||||
|
||||
found, err := m.List(key)
|
||||
if err != nil {
|
||||
t.Errorf("List error, key %#v: %v", key, err)
|
||||
return
|
||||
}
|
||||
if !reflect.DeepEqual(expected, found) {
|
||||
t.Errorf("List error, key %#v: Expected %#v, got %#v", key, expected, found)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// CheckSet does not call Errorf() iff there is a set key with the
|
||||
// expected values.
|
||||
// Normal use case is `m.CheckSet(t, "visited", "Rome", "Stockholm", "Dublin")`.
|
||||
func (m *Miniredis) CheckSet(t T, key string, expected ...string) {
|
||||
t.Helper()
|
||||
|
||||
found, err := m.Members(key)
|
||||
if err != nil {
|
||||
t.Errorf("Set error, key %#v: %v", key, err)
|
||||
return
|
||||
}
|
||||
sort.Strings(expected)
|
||||
if !reflect.DeepEqual(expected, found) {
|
||||
t.Errorf("Set error, key %#v: Expected %#v, got %#v", key, expected, found)
|
||||
return
|
||||
}
|
||||
}
|
||||
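Because the T interface above only asks for Helper() and Errorf(), a *testing.T can be passed straight in. Below is a minimal sketch of driving the check helpers from a test; RunT and Set are taken from the README and changelog shown in this diff, everything else is standard library, and the file layout is hypothetical.

```go
package example

import (
	"testing"

	"github.com/alicebob/miniredis/v2"
)

func TestCheckGet(t *testing.T) {
	m := miniredis.RunT(t) // in-process server for the duration of the test

	// Seed a string key directly, then assert on it. CheckGet calls
	// t.Errorf only if the stored value differs from the expectation.
	m.Set("username", "theking")
	m.CheckGet(t, "username", "theking")
}
```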
67  vendor/github.com/alicebob/miniredis/v2/cmd_cluster.go  generated  vendored
@@ -1,67 +0,0 @@
|
||||
// Commands from https://redis.io/commands#cluster
|
||||
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
// commandsCluster handles some cluster operations.
|
||||
func commandsCluster(m *Miniredis) {
|
||||
m.srv.Register("CLUSTER", m.cmdCluster)
|
||||
}
|
||||
|
||||
func (m *Miniredis) cmdCluster(c *server.Peer, cmd string, args []string) {
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
switch strings.ToUpper(args[0]) {
|
||||
case "SLOTS":
|
||||
m.cmdClusterSlots(c, cmd, args)
|
||||
case "KEYSLOT":
|
||||
m.cmdClusterKeySlot(c, cmd, args)
|
||||
case "NODES":
|
||||
m.cmdClusterNodes(c, cmd, args)
|
||||
default:
|
||||
setDirty(c)
|
||||
c.WriteError(fmt.Sprintf("ERR 'CLUSTER %s' not supported", strings.Join(args, " ")))
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// CLUSTER SLOTS
|
||||
func (m *Miniredis) cmdClusterSlots(c *server.Peer, cmd string, args []string) {
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
c.WriteLen(1)
|
||||
c.WriteLen(3)
|
||||
c.WriteInt(0)
|
||||
c.WriteInt(16383)
|
||||
c.WriteLen(3)
|
||||
c.WriteBulk(m.srv.Addr().IP.String())
|
||||
c.WriteInt(m.srv.Addr().Port)
|
||||
c.WriteBulk("09dbe9720cda62f7865eabc5fd8857c5d2678366")
|
||||
})
|
||||
}
|
||||
|
||||
// CLUSTER KEYSLOT
|
||||
func (m *Miniredis) cmdClusterKeySlot(c *server.Peer, cmd string, args []string) {
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
c.WriteInt(163)
|
||||
})
|
||||
}
|
||||
|
||||
// CLUSTER NODES
|
||||
func (m *Miniredis) cmdClusterNodes(c *server.Peer, cmd string, args []string) {
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
c.WriteBulk("e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 127.0.0.1:7000@7000 myself,master - 0 0 1 connected 0-16383")
|
||||
})
|
||||
}
|
||||
2045  vendor/github.com/alicebob/miniredis/v2/cmd_command.go  generated  vendored
File diff suppressed because it is too large
284  vendor/github.com/alicebob/miniredis/v2/cmd_connection.go  generated  vendored
@@ -1,284 +0,0 @@
|
||||
// Commands from https://redis.io/commands#connection
|
||||
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
func commandsConnection(m *Miniredis) {
|
||||
m.srv.Register("AUTH", m.cmdAuth)
|
||||
m.srv.Register("ECHO", m.cmdEcho)
|
||||
m.srv.Register("HELLO", m.cmdHello)
|
||||
m.srv.Register("PING", m.cmdPing)
|
||||
m.srv.Register("QUIT", m.cmdQuit)
|
||||
m.srv.Register("SELECT", m.cmdSelect)
|
||||
m.srv.Register("SWAPDB", m.cmdSwapdb)
|
||||
}
|
||||
|
||||
// PING
|
||||
func (m *Miniredis) cmdPing(c *server.Peer, cmd string, args []string) {
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
|
||||
if len(args) > 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
payload := ""
|
||||
if len(args) > 0 {
|
||||
payload = args[0]
|
||||
}
|
||||
|
||||
// PING is allowed in subscribed state
|
||||
if sub := getCtx(c).subscriber; sub != nil {
|
||||
c.Block(func(c *server.Writer) {
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk("pong")
|
||||
c.WriteBulk(payload)
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
if payload == "" {
|
||||
c.WriteInline("PONG")
|
||||
return
|
||||
}
|
||||
c.WriteBulk(payload)
|
||||
})
|
||||
}
|
||||
|
||||
// AUTH
|
||||
func (m *Miniredis) cmdAuth(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
if len(args) > 2 {
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
if getCtx(c).nested {
|
||||
c.WriteError(msgNotFromScripts)
|
||||
return
|
||||
}
|
||||
|
||||
var opts = struct {
|
||||
username string
|
||||
password string
|
||||
}{
|
||||
username: "default",
|
||||
password: args[0],
|
||||
}
|
||||
if len(args) == 2 {
|
||||
opts.username, opts.password = args[0], args[1]
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
if len(m.passwords) == 0 && opts.username == "default" {
|
||||
c.WriteError("ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?")
|
||||
return
|
||||
}
|
||||
setPW, ok := m.passwords[opts.username]
|
||||
if !ok {
|
||||
c.WriteError("WRONGPASS invalid username-password pair")
|
||||
return
|
||||
}
|
||||
if setPW != opts.password {
|
||||
c.WriteError("WRONGPASS invalid username-password pair")
|
||||
return
|
||||
}
|
||||
|
||||
ctx.authenticated = true
|
||||
c.WriteOK()
|
||||
})
|
||||
}
|
||||
|
||||
// HELLO
|
||||
func (m *Miniredis) cmdHello(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
version int
|
||||
username string
|
||||
password string
|
||||
}
|
||||
|
||||
if ok := optIntErr(c, args[0], &opts.version, "ERR Protocol version is not an integer or out of range"); !ok {
|
||||
return
|
||||
}
|
||||
args = args[1:]
|
||||
|
||||
switch opts.version {
|
||||
case 2, 3:
|
||||
default:
|
||||
c.WriteError("NOPROTO unsupported protocol version")
|
||||
return
|
||||
}
|
||||
|
||||
var checkAuth bool
|
||||
for len(args) > 0 {
|
||||
switch strings.ToUpper(args[0]) {
|
||||
case "AUTH":
|
||||
if len(args) < 3 {
|
||||
c.WriteError(fmt.Sprintf("ERR Syntax error in HELLO option '%s'", args[0]))
|
||||
return
|
||||
}
|
||||
opts.username, opts.password, args = args[1], args[2], args[3:]
|
||||
checkAuth = true
|
||||
case "SETNAME":
|
||||
if len(args) < 2 {
|
||||
c.WriteError(fmt.Sprintf("ERR Syntax error in HELLO option '%s'", args[0]))
|
||||
return
|
||||
}
|
||||
_, args = args[1], args[2:]
|
||||
default:
|
||||
c.WriteError(fmt.Sprintf("ERR Syntax error in HELLO option '%s'", args[0]))
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
if len(m.passwords) == 0 && opts.username == "default" {
|
||||
// redis ignores legacy "AUTH" if it's not enabled.
|
||||
checkAuth = false
|
||||
}
|
||||
if checkAuth {
|
||||
setPW, ok := m.passwords[opts.username]
|
||||
if !ok {
|
||||
c.WriteError("WRONGPASS invalid username-password pair")
|
||||
return
|
||||
}
|
||||
if setPW != opts.password {
|
||||
c.WriteError("WRONGPASS invalid username-password pair")
|
||||
return
|
||||
}
|
||||
getCtx(c).authenticated = true
|
||||
}
|
||||
|
||||
c.Resp3 = opts.version == 3
|
||||
|
||||
c.WriteMapLen(7)
|
||||
c.WriteBulk("server")
|
||||
c.WriteBulk("miniredis")
|
||||
c.WriteBulk("version")
|
||||
c.WriteBulk("6.0.5")
|
||||
c.WriteBulk("proto")
|
||||
c.WriteInt(opts.version)
|
||||
c.WriteBulk("id")
|
||||
c.WriteInt(42)
|
||||
c.WriteBulk("mode")
|
||||
c.WriteBulk("standalone")
|
||||
c.WriteBulk("role")
|
||||
c.WriteBulk("master")
|
||||
c.WriteBulk("modules")
|
||||
c.WriteLen(0)
|
||||
}
|
||||
|
||||
// ECHO
|
||||
func (m *Miniredis) cmdEcho(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
msg := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
c.WriteBulk(msg)
|
||||
})
|
||||
}
|
||||
|
||||
// SELECT
|
||||
func (m *Miniredis) cmdSelect(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.isValidCMD(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
id int
|
||||
}
|
||||
if ok := optInt(c, args[0], &opts.id); !ok {
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
if opts.id < 0 {
|
||||
c.WriteError(msgDBIndexOutOfRange)
|
||||
setDirty(c)
|
||||
return
|
||||
}
|
||||
|
||||
ctx.selectedDB = opts.id
|
||||
c.WriteOK()
|
||||
})
|
||||
}
|
||||
|
||||
// SWAPDB
|
||||
func (m *Miniredis) cmdSwapdb(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
id1 int
|
||||
id2 int
|
||||
}
|
||||
|
||||
if ok := optIntErr(c, args[0], &opts.id1, "ERR invalid first DB index"); !ok {
|
||||
return
|
||||
}
|
||||
if ok := optIntErr(c, args[1], &opts.id2, "ERR invalid second DB index"); !ok {
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
if opts.id1 < 0 || opts.id2 < 0 {
|
||||
c.WriteError(msgDBIndexOutOfRange)
|
||||
setDirty(c)
|
||||
return
|
||||
}
|
||||
|
||||
m.swapDB(opts.id1, opts.id2)
|
||||
|
||||
c.WriteOK()
|
||||
})
|
||||
}
|
||||
|
||||
// QUIT
|
||||
func (m *Miniredis) cmdQuit(c *server.Peer, cmd string, args []string) {
|
||||
// QUIT isn't transactionfied and accepts any arguments.
|
||||
c.WriteOK()
|
||||
c.Close()
|
||||
}
|
||||
669  vendor/github.com/alicebob/miniredis/v2/cmd_generic.go  generated  vendored
@@ -1,669 +0,0 @@
|
||||
// Commands from https://redis.io/commands#generic
|
||||
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
// commandsGeneric handles EXPIRE, TTL, PERSIST, &c.
|
||||
func commandsGeneric(m *Miniredis) {
|
||||
m.srv.Register("COPY", m.cmdCopy)
|
||||
m.srv.Register("DEL", m.cmdDel)
|
||||
// DUMP
|
||||
m.srv.Register("EXISTS", m.cmdExists)
|
||||
m.srv.Register("EXPIRE", makeCmdExpire(m, false, time.Second))
|
||||
m.srv.Register("EXPIREAT", makeCmdExpire(m, true, time.Second))
|
||||
m.srv.Register("KEYS", m.cmdKeys)
|
||||
// MIGRATE
|
||||
m.srv.Register("MOVE", m.cmdMove)
|
||||
// OBJECT
|
||||
m.srv.Register("PERSIST", m.cmdPersist)
|
||||
m.srv.Register("PEXPIRE", makeCmdExpire(m, false, time.Millisecond))
|
||||
m.srv.Register("PEXPIREAT", makeCmdExpire(m, true, time.Millisecond))
|
||||
m.srv.Register("PTTL", m.cmdPTTL)
|
||||
m.srv.Register("RANDOMKEY", m.cmdRandomkey)
|
||||
m.srv.Register("RENAME", m.cmdRename)
|
||||
m.srv.Register("RENAMENX", m.cmdRenamenx)
|
||||
// RESTORE
|
||||
m.srv.Register("TOUCH", m.cmdTouch)
|
||||
m.srv.Register("TTL", m.cmdTTL)
|
||||
m.srv.Register("TYPE", m.cmdType)
|
||||
m.srv.Register("SCAN", m.cmdScan)
|
||||
// SORT
|
||||
m.srv.Register("UNLINK", m.cmdDel)
|
||||
}
|
||||
|
||||
// generic expire command for EXPIRE, PEXPIRE, EXPIREAT, PEXPIREAT
|
||||
// d is the time unit. If unix is set it'll be seen as a unixtimestamp and
|
||||
// converted to a duration.
|
||||
func makeCmdExpire(m *Miniredis, unix bool, d time.Duration) func(*server.Peer, string, []string) {
|
||||
return func(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
key string
|
||||
value int
|
||||
}
|
||||
opts.key = args[0]
|
||||
if ok := optInt(c, args[1], &opts.value); !ok {
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
// Key must be present.
|
||||
if _, ok := db.keys[opts.key]; !ok {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if unix {
|
||||
db.ttl[opts.key] = m.at(opts.value, d)
|
||||
} else {
|
||||
db.ttl[opts.key] = time.Duration(opts.value) * d
|
||||
}
|
||||
db.keyVersion[opts.key]++
|
||||
db.checkTTL(opts.key)
|
||||
c.WriteInt(1)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// TOUCH
|
||||
func (m *Miniredis) cmdTouch(c *server.Peer, cmd string, args []string) {
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
count := 0
|
||||
for _, key := range args {
|
||||
if db.exists(key) {
|
||||
count++
|
||||
}
|
||||
}
|
||||
c.WriteInt(count)
|
||||
})
|
||||
}
|
||||
|
||||
// TTL
|
||||
func (m *Miniredis) cmdTTL(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if _, ok := db.keys[key]; !ok {
|
||||
// No such key
|
||||
c.WriteInt(-2)
|
||||
return
|
||||
}
|
||||
|
||||
v, ok := db.ttl[key]
|
||||
if !ok {
|
||||
// no expire value
|
||||
c.WriteInt(-1)
|
||||
return
|
||||
}
|
||||
c.WriteInt(int(v.Seconds()))
|
||||
})
|
||||
}
|
||||
|
||||
// PTTL
|
||||
func (m *Miniredis) cmdPTTL(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if _, ok := db.keys[key]; !ok {
|
||||
// no such key
|
||||
c.WriteInt(-2)
|
||||
return
|
||||
}
|
||||
|
||||
v, ok := db.ttl[key]
|
||||
if !ok {
|
||||
// no expire value
|
||||
c.WriteInt(-1)
|
||||
return
|
||||
}
|
||||
c.WriteInt(int(v.Nanoseconds() / 1000000))
|
||||
})
|
||||
}
|
||||
|
||||
// PERSIST
|
||||
func (m *Miniredis) cmdPersist(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if _, ok := db.keys[key]; !ok {
|
||||
// no such key
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
|
||||
if _, ok := db.ttl[key]; !ok {
|
||||
// no expire value
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
delete(db.ttl, key)
|
||||
db.keyVersion[key]++
|
||||
c.WriteInt(1)
|
||||
})
|
||||
}
|
||||
|
||||
// DEL and UNLINK
|
||||
func (m *Miniredis) cmdDel(c *server.Peer, cmd string, args []string) {
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
count := 0
|
||||
for _, key := range args {
|
||||
if db.exists(key) {
|
||||
count++
|
||||
}
|
||||
db.del(key, true) // delete expire
|
||||
}
|
||||
c.WriteInt(count)
|
||||
})
|
||||
}
|
||||
|
||||
// TYPE
|
||||
func (m *Miniredis) cmdType(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError("usage error")
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
c.WriteInline("none")
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteInline(t)
|
||||
})
|
||||
}
|
||||
|
||||
// EXISTS
|
||||
func (m *Miniredis) cmdExists(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
found := 0
|
||||
for _, k := range args {
|
||||
if db.exists(k) {
|
||||
found++
|
||||
}
|
||||
}
|
||||
c.WriteInt(found)
|
||||
})
|
||||
}
|
||||
|
||||
// MOVE
|
||||
func (m *Miniredis) cmdMove(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
key string
|
||||
targetDB int
|
||||
}
|
||||
|
||||
opts.key = args[0]
|
||||
opts.targetDB, _ = strconv.Atoi(args[1])
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
if ctx.selectedDB == opts.targetDB {
|
||||
c.WriteError("ERR source and destination objects are the same")
|
||||
return
|
||||
}
|
||||
db := m.db(ctx.selectedDB)
|
||||
targetDB := m.db(opts.targetDB)
|
||||
|
||||
if !db.move(opts.key, targetDB) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
c.WriteInt(1)
|
||||
})
|
||||
}
|
||||
|
||||
// KEYS
|
||||
func (m *Miniredis) cmdKeys(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
keys, _ := matchKeys(db.allKeys(), key)
|
||||
c.WriteLen(len(keys))
|
||||
for _, s := range keys {
|
||||
c.WriteBulk(s)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// RANDOMKEY
|
||||
func (m *Miniredis) cmdRandomkey(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if len(db.keys) == 0 {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
nr := m.randIntn(len(db.keys))
|
||||
for k := range db.keys {
|
||||
if nr == 0 {
|
||||
c.WriteBulk(k)
|
||||
return
|
||||
}
|
||||
nr--
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// RENAME
|
||||
func (m *Miniredis) cmdRename(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
from string
|
||||
to string
|
||||
}{
|
||||
from: args[0],
|
||||
to: args[1],
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(opts.from) {
|
||||
c.WriteError(msgKeyNotFound)
|
||||
return
|
||||
}
|
||||
|
||||
db.rename(opts.from, opts.to)
|
||||
c.WriteOK()
|
||||
})
|
||||
}
|
||||
|
||||
// RENAMENX
|
||||
func (m *Miniredis) cmdRenamenx(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
from string
|
||||
to string
|
||||
}{
|
||||
from: args[0],
|
||||
to: args[1],
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(opts.from) {
|
||||
c.WriteError(msgKeyNotFound)
|
||||
return
|
||||
}
|
||||
|
||||
if db.exists(opts.to) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
|
||||
db.rename(opts.from, opts.to)
|
||||
c.WriteInt(1)
|
||||
})
|
||||
}
|
||||
|
||||
// SCAN
|
||||
func (m *Miniredis) cmdScan(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
cursor int
|
||||
withMatch bool
|
||||
match string
|
||||
withType bool
|
||||
_type string
|
||||
}
|
||||
|
||||
if ok := optIntErr(c, args[0], &opts.cursor, msgInvalidCursor); !ok {
|
||||
return
|
||||
}
|
||||
args = args[1:]
|
||||
|
||||
// MATCH, COUNT and TYPE options
|
||||
for len(args) > 0 {
|
||||
if strings.ToLower(args[0]) == "count" {
|
||||
// we do nothing with count
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
if _, err := strconv.Atoi(args[1]); err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
args = args[2:]
|
||||
continue
|
||||
}
|
||||
if strings.ToLower(args[0]) == "match" {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
opts.withMatch = true
|
||||
opts.match, args = args[1], args[2:]
|
||||
continue
|
||||
}
|
||||
if strings.ToLower(args[0]) == "type" {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
opts.withType = true
|
||||
opts._type, args = strings.ToLower(args[1]), args[2:]
|
||||
continue
|
||||
}
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
// We return _all_ (matched) keys every time.
|
||||
|
||||
if opts.cursor != 0 {
|
||||
// Invalid cursor.
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk("0") // no next cursor
|
||||
c.WriteLen(0) // no elements
|
||||
return
|
||||
}
|
||||
|
||||
var keys []string
|
||||
|
||||
if opts.withType {
|
||||
keys = make([]string, 0)
|
||||
for k, t := range db.keys {
|
||||
// type must be given exactly; no pattern matching is performed
|
||||
if t == opts._type {
|
||||
keys = append(keys, k)
|
||||
}
|
||||
}
|
||||
sort.Strings(keys) // To make things deterministic.
|
||||
} else {
|
||||
keys = db.allKeys()
|
||||
}
|
||||
|
||||
if opts.withMatch {
|
||||
keys, _ = matchKeys(keys, opts.match)
|
||||
}
|
||||
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk("0") // no next cursor
|
||||
c.WriteLen(len(keys))
|
||||
for _, k := range keys {
|
||||
c.WriteBulk(k)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// COPY
|
||||
func (m *Miniredis) cmdCopy(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts = struct {
|
||||
from string
|
||||
to string
|
||||
destinationDB int
|
||||
replace bool
|
||||
}{
|
||||
destinationDB: -1,
|
||||
}
|
||||
|
||||
opts.from, opts.to, args = args[0], args[1], args[2:]
|
||||
for len(args) > 0 {
|
||||
switch strings.ToLower(args[0]) {
|
||||
case "db":
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
db, err := strconv.Atoi(args[1])
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
if db < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgDBIndexOutOfRange)
|
||||
return
|
||||
}
|
||||
opts.destinationDB = db
|
||||
args = args[2:]
|
||||
case "replace":
|
||||
opts.replace = true
|
||||
args = args[1:]
|
||||
default:
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
fromDB, toDB := ctx.selectedDB, opts.destinationDB
|
||||
if toDB == -1 {
|
||||
toDB = fromDB
|
||||
}
|
||||
|
||||
if fromDB == toDB && opts.from == opts.to {
|
||||
c.WriteError("ERR source and destination objects are the same")
|
||||
return
|
||||
}
|
||||
|
||||
if !m.db(fromDB).exists(opts.from) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
|
||||
if !opts.replace {
|
||||
if m.db(toDB).exists(opts.to) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
m.copy(m.db(fromDB), opts.from, m.db(toDB), opts.to)
|
||||
c.WriteInt(1)
|
||||
})
|
||||
}
|
||||
609  vendor/github.com/alicebob/miniredis/v2/cmd_geo.go  generated  vendored
@@ -1,609 +0,0 @@
|
||||
// Commands from https://redis.io/commands#geo
|
||||
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
// commandsGeo handles GEOADD, GEORADIUS etc.
|
||||
func commandsGeo(m *Miniredis) {
|
||||
m.srv.Register("GEOADD", m.cmdGeoadd)
|
||||
m.srv.Register("GEODIST", m.cmdGeodist)
|
||||
m.srv.Register("GEOPOS", m.cmdGeopos)
|
||||
m.srv.Register("GEORADIUS", m.cmdGeoradius)
|
||||
m.srv.Register("GEORADIUS_RO", m.cmdGeoradius)
|
||||
m.srv.Register("GEORADIUSBYMEMBER", m.cmdGeoradiusbymember)
|
||||
m.srv.Register("GEORADIUSBYMEMBER_RO", m.cmdGeoradiusbymember)
|
||||
}
|
||||
|
||||
// GEOADD
|
||||
func (m *Miniredis) cmdGeoadd(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 3 || len(args[1:])%3 != 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
key, args := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if db.exists(key) && db.t(key) != "zset" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
toSet := map[string]float64{}
|
||||
for len(args) > 2 {
|
||||
rawLong, rawLat, name := args[0], args[1], args[2]
|
||||
args = args[3:]
|
||||
longitude, err := strconv.ParseFloat(rawLong, 64)
|
||||
if err != nil {
|
||||
c.WriteError("ERR value is not a valid float")
|
||||
return
|
||||
}
|
||||
latitude, err := strconv.ParseFloat(rawLat, 64)
|
||||
if err != nil {
|
||||
c.WriteError("ERR value is not a valid float")
|
||||
return
|
||||
}
|
||||
|
||||
if latitude < -85.05112878 ||
|
||||
latitude > 85.05112878 ||
|
||||
longitude < -180 ||
|
||||
longitude > 180 {
|
||||
c.WriteError(fmt.Sprintf("ERR invalid longitude,latitude pair %.6f,%.6f", longitude, latitude))
|
||||
return
|
||||
}
|
||||
|
||||
toSet[name] = float64(toGeohash(longitude, latitude))
|
||||
}
|
||||
|
||||
set := 0
|
||||
for name, score := range toSet {
|
||||
if db.ssetAdd(key, score, name) {
|
||||
set++
|
||||
}
|
||||
}
|
||||
c.WriteInt(set)
|
||||
})
|
||||
}
|
||||
|
||||
// GEODIST
|
||||
func (m *Miniredis) cmdGeodist(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, from, to, args := args[0], args[1], args[2], args[3:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
if !db.exists(key) {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
if db.t(key) != "zset" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
unit := "m"
|
||||
if len(args) > 0 {
|
||||
unit, args = args[0], args[1:]
|
||||
}
|
||||
if len(args) > 0 {
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
|
||||
toMeter := parseUnit(unit)
|
||||
if toMeter == 0 {
|
||||
c.WriteError(msgUnsupportedUnit)
|
||||
return
|
||||
}
|
||||
|
||||
members := db.sortedsetKeys[key]
|
||||
fromD, okFrom := members.get(from)
|
||||
toD, okTo := members.get(to)
|
||||
if !okFrom || !okTo {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
|
||||
fromLo, fromLat := fromGeohash(uint64(fromD))
|
||||
toLo, toLat := fromGeohash(uint64(toD))
|
||||
|
||||
dist := distance(fromLat, fromLo, toLat, toLo) / toMeter
|
||||
c.WriteBulk(fmt.Sprintf("%.4f", dist))
|
||||
})
|
||||
}
|
||||
|
||||
// GEOPOS
|
||||
func (m *Miniredis) cmdGeopos(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
key, args := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if db.exists(key) && db.t(key) != "zset" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteLen(len(args))
|
||||
for _, l := range args {
|
||||
if !db.ssetExists(key, l) {
|
||||
c.WriteLen(-1)
|
||||
continue
|
||||
}
|
||||
score := db.ssetScore(key, l)
|
||||
c.WriteLen(2)
|
||||
long, lat := fromGeohash(uint64(score))
|
||||
c.WriteBulk(fmt.Sprintf("%f", long))
|
||||
c.WriteBulk(fmt.Sprintf("%f", lat))
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
type geoDistance struct {
|
||||
Name string
|
||||
Score float64
|
||||
Distance float64
|
||||
Longitude float64
|
||||
Latitude float64
|
||||
}
|
||||
|
||||
// GEORADIUS and GEORADIUS_RO
|
||||
func (m *Miniredis) cmdGeoradius(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 5 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
longitude, err := strconv.ParseFloat(args[1], 64)
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
latitude, err := strconv.ParseFloat(args[2], 64)
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
radius, err := strconv.ParseFloat(args[3], 64)
|
||||
if err != nil || radius < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
toMeter := parseUnit(args[4])
|
||||
if toMeter == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
args = args[5:]
|
||||
|
||||
var opts struct {
|
||||
withDist bool
|
||||
withCoord bool
|
||||
direction direction // unsorted
|
||||
count int
|
||||
withStore bool
|
||||
storeKey string
|
||||
withStoredist bool
|
||||
storedistKey string
|
||||
}
|
||||
for len(args) > 0 {
|
||||
arg := args[0]
|
||||
args = args[1:]
|
||||
switch strings.ToUpper(arg) {
|
||||
case "WITHCOORD":
|
||||
opts.withCoord = true
|
||||
case "WITHDIST":
|
||||
opts.withDist = true
|
||||
case "ASC":
|
||||
opts.direction = asc
|
||||
case "DESC":
|
||||
opts.direction = desc
|
||||
case "COUNT":
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
n, err := strconv.Atoi(args[0])
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
if n <= 0 {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR COUNT must be > 0")
|
||||
return
|
||||
}
|
||||
args = args[1:]
|
||||
opts.count = n
|
||||
case "STORE":
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
opts.withStore = true
|
||||
opts.storeKey = args[0]
|
||||
args = args[1:]
|
||||
case "STOREDIST":
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
opts.withStoredist = true
|
||||
opts.storedistKey = args[0]
|
||||
args = args[1:]
|
||||
default:
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
if strings.ToUpper(cmd) == "GEORADIUS_RO" && (opts.withStore || opts.withStoredist) {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
if (opts.withStore || opts.withStoredist) && (opts.withDist || opts.withCoord) {
|
||||
c.WriteError("ERR STORE option in GEORADIUS is not compatible with WITHDIST, WITHHASH and WITHCOORDS options")
|
||||
return
|
||||
}
|
||||
|
||||
db := m.db(ctx.selectedDB)
|
||||
members := db.ssetElements(key)
|
||||
|
||||
matches := withinRadius(members, longitude, latitude, radius*toMeter)
|
||||
|
||||
// deal with ASC/DESC
|
||||
if opts.direction != unsorted {
|
||||
sort.Slice(matches, func(i, j int) bool {
|
||||
if opts.direction == desc {
|
||||
return matches[i].Distance > matches[j].Distance
|
||||
}
|
||||
return matches[i].Distance < matches[j].Distance
|
||||
})
|
||||
}
|
||||
|
||||
// deal with COUNT
|
||||
if opts.count > 0 && len(matches) > opts.count {
|
||||
matches = matches[:opts.count]
|
||||
}
|
||||
|
||||
// deal with "STORE x"
|
||||
if opts.withStore {
|
||||
db.del(opts.storeKey, true)
|
||||
for _, member := range matches {
|
||||
db.ssetAdd(opts.storeKey, member.Score, member.Name)
|
||||
}
|
||||
c.WriteInt(len(matches))
|
||||
return
|
||||
}
|
||||
|
||||
// deal with "STOREDIST x"
|
||||
if opts.withStoredist {
|
||||
db.del(opts.storedistKey, true)
|
||||
for _, member := range matches {
|
||||
db.ssetAdd(opts.storedistKey, member.Distance/toMeter, member.Name)
|
||||
}
|
||||
c.WriteInt(len(matches))
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteLen(len(matches))
|
||||
for _, member := range matches {
|
||||
if !opts.withDist && !opts.withCoord {
|
||||
c.WriteBulk(member.Name)
|
||||
continue
|
||||
}
|
||||
|
||||
len := 1
|
||||
if opts.withDist {
|
||||
len++
|
||||
}
|
||||
if opts.withCoord {
|
||||
len++
|
||||
}
|
||||
c.WriteLen(len)
|
||||
c.WriteBulk(member.Name)
|
||||
if opts.withDist {
|
||||
c.WriteBulk(fmt.Sprintf("%.4f", member.Distance/toMeter))
|
||||
}
|
||||
if opts.withCoord {
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk(fmt.Sprintf("%f", member.Longitude))
|
||||
c.WriteBulk(fmt.Sprintf("%f", member.Latitude))
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// GEORADIUSBYMEMBER and GEORADIUSBYMEMBER_RO
|
||||
func (m *Miniredis) cmdGeoradiusbymember(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 4 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
member string
|
||||
radius float64
|
||||
toMeter float64
|
||||
|
||||
withDist bool
|
||||
withCoord bool
|
||||
direction direction // unsorted
|
||||
count int
|
||||
withStore bool
|
||||
storeKey string
|
||||
withStoredist bool
|
||||
storedistKey string
|
||||
}{
|
||||
key: args[0],
|
||||
member: args[1],
|
||||
}
|
||||
|
||||
r, err := strconv.ParseFloat(args[2], 64)
|
||||
if err != nil || r < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
opts.radius = r
|
||||
|
||||
opts.toMeter = parseUnit(args[3])
|
||||
if opts.toMeter == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
args = args[4:]
|
||||
|
||||
for len(args) > 0 {
|
||||
arg := args[0]
|
||||
args = args[1:]
|
||||
switch strings.ToUpper(arg) {
|
||||
case "WITHCOORD":
|
||||
opts.withCoord = true
|
||||
case "WITHDIST":
|
||||
opts.withDist = true
|
||||
case "ASC":
|
||||
opts.direction = asc
|
||||
case "DESC":
|
||||
opts.direction = desc
|
||||
case "COUNT":
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
n, err := strconv.Atoi(args[0])
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
if n <= 0 {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR COUNT must be > 0")
|
||||
return
|
||||
}
|
||||
args = args[1:]
|
||||
opts.count = n
|
||||
case "STORE":
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
opts.withStore = true
|
||||
opts.storeKey = args[0]
|
||||
args = args[1:]
|
||||
case "STOREDIST":
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
opts.withStoredist = true
|
||||
opts.storedistKey = args[0]
|
||||
args = args[1:]
|
||||
default:
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
if strings.ToUpper(cmd) == "GEORADIUSBYMEMBER_RO" && (opts.withStore || opts.withStoredist) {
|
||||
setDirty(c)
|
||||
c.WriteError("ERR syntax error")
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
if (opts.withStore || opts.withStoredist) && (opts.withDist || opts.withCoord) {
|
||||
c.WriteError("ERR STORE option in GEORADIUS is not compatible with WITHDIST, WITHHASH and WITHCOORDS options")
|
||||
return
|
||||
}
|
||||
|
||||
db := m.db(ctx.selectedDB)
|
||||
if !db.exists(opts.key) {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
|
||||
if db.t(opts.key) != "zset" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
// get position of member
|
||||
if !db.ssetExists(opts.key, opts.member) {
|
||||
c.WriteError("ERR could not decode requested zset member")
|
||||
return
|
||||
}
|
||||
score := db.ssetScore(opts.key, opts.member)
|
||||
longitude, latitude := fromGeohash(uint64(score))
|
||||
|
||||
members := db.ssetElements(opts.key)
|
||||
matches := withinRadius(members, longitude, latitude, opts.radius*opts.toMeter)
|
||||
|
||||
// deal with ASC/DESC
|
||||
if opts.direction != unsorted {
|
||||
sort.Slice(matches, func(i, j int) bool {
|
||||
if opts.direction == desc {
|
||||
return matches[i].Distance > matches[j].Distance
|
||||
}
|
||||
return matches[i].Distance < matches[j].Distance
|
||||
})
|
||||
}
|
||||
|
||||
// deal with COUNT
|
||||
if opts.count > 0 && len(matches) > opts.count {
|
||||
matches = matches[:opts.count]
|
||||
}
|
||||
|
||||
// deal with "STORE x"
|
||||
if opts.withStore {
|
||||
db.del(opts.storeKey, true)
|
||||
for _, member := range matches {
|
||||
db.ssetAdd(opts.storeKey, member.Score, member.Name)
|
||||
}
|
||||
c.WriteInt(len(matches))
|
||||
return
|
||||
}
|
||||
|
||||
// deal with "STOREDIST x"
|
||||
if opts.withStoredist {
|
||||
db.del(opts.storedistKey, true)
|
||||
for _, member := range matches {
|
||||
db.ssetAdd(opts.storedistKey, member.Distance/opts.toMeter, member.Name)
|
||||
}
|
||||
c.WriteInt(len(matches))
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteLen(len(matches))
|
||||
for _, member := range matches {
|
||||
if !opts.withDist && !opts.withCoord {
|
||||
c.WriteBulk(member.Name)
|
||||
continue
|
||||
}
|
||||
|
||||
len := 1
|
||||
if opts.withDist {
|
||||
len++
|
||||
}
|
||||
if opts.withCoord {
|
||||
len++
|
||||
}
|
||||
c.WriteLen(len)
|
||||
c.WriteBulk(member.Name)
|
||||
if opts.withDist {
|
||||
c.WriteBulk(fmt.Sprintf("%.4f", member.Distance/opts.toMeter))
|
||||
}
|
||||
if opts.withCoord {
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk(fmt.Sprintf("%f", member.Longitude))
|
||||
c.WriteBulk(fmt.Sprintf("%f", member.Latitude))
|
||||
}
|
||||
}
|
||||
})
|
||||
}
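Both GEORADIUS handlers above finish by shaping the reply the same way: without WITHDIST or WITHCOORD each match is written as a bare member name, otherwise each match becomes a small array whose length depends on which extras were requested. The hypothetical helper below reproduces that branching with plain Go values; it is an illustrative sketch, not part of the vendored miniredis code.

```go
package main

import "fmt"

type geoMatch struct {
	Name                string
	DistanceMeters      float64
	Longitude, Latitude float64
}

// replyShape mirrors the write loop above, but builds a plain Go value
// instead of writing RESP: bare names without extras, nested entries with them.
func replyShape(matches []geoMatch, withDist, withCoord bool, toMeter float64) []interface{} {
	out := make([]interface{}, 0, len(matches))
	for _, m := range matches {
		if !withDist && !withCoord {
			out = append(out, m.Name)
			continue
		}
		entry := []interface{}{m.Name}
		if withDist {
			entry = append(entry, fmt.Sprintf("%.4f", m.DistanceMeters/toMeter))
		}
		if withCoord {
			entry = append(entry, []string{
				fmt.Sprintf("%f", m.Longitude),
				fmt.Sprintf("%f", m.Latitude),
			})
		}
		out = append(out, entry)
	}
	return out
}

func main() {
	ms := []geoMatch{{Name: "Catania", DistanceMeters: 56400, Longitude: 15.087269, Latitude: 37.502669}}
	fmt.Println(replyShape(ms, true, true, 1000)) // [[Catania 56.4000 [15.087269 37.502669]]]
}
```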
|
||||
|
||||
func withinRadius(members []ssElem, longitude, latitude, radius float64) []geoDistance {
	matches := []geoDistance{}
	for _, el := range members {
		elLo, elLat := fromGeohash(uint64(el.score))
		distanceInMeter := distance(latitude, longitude, elLat, elLo)

		if distanceInMeter <= radius {
			matches = append(matches, geoDistance{
				Name:      el.member,
				Score:     el.score,
				Distance:  distanceInMeter,
				Longitude: elLo,
				Latitude:  elLat,
			})
		}
	}
	return matches
}

func parseUnit(u string) float64 {
	switch u {
	case "m":
		return 1
	case "km":
		return 1000
	case "mi":
		return 1609.34
	case "ft":
		return 0.3048
	default:
		return 0
	}
}
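parseUnit and withinRadius are the core of the radius queries above: the unit argument becomes a metres multiplier, and a member survives only if its great-circle distance from the query point is at most radius*toMeter. A self-contained sketch of that flow follows; the haversine helper and the sample coordinates are illustrative assumptions standing in for miniredis's own distance() helper, which is not shown in this hunk.

```go
package main

import (
	"fmt"
	"math"
)

// unitToMeters mirrors parseUnit above: metres-per-unit factor, 0 if unknown.
func unitToMeters(u string) float64 {
	switch u {
	case "m":
		return 1
	case "km":
		return 1000
	case "mi":
		return 1609.34
	case "ft":
		return 0.3048
	default:
		return 0
	}
}

// haversine is an illustrative great-circle distance in metres.
func haversine(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadius = 6372797.560856 // approximate Earth radius in metres
	toRad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadius * math.Asin(math.Sqrt(a))
}

type point struct {
	name     string
	lon, lat float64
}

func main() {
	members := []point{
		{"Palermo", 13.361389, 38.115556},
		{"Catania", 15.087269, 37.502669},
	}
	// GEORADIUS-style query: centre point plus a radius in some unit,
	// converted to metres before filtering.
	lon, lat, radius := 15.0, 37.0, 200.0
	toMeter := unitToMeters("km")
	for _, p := range members {
		if d := haversine(lat, lon, p.lat, p.lon); d <= radius*toMeter {
			fmt.Printf("%s is within %.0f km (%.1f km away)\n", p.name, radius, d/1000)
		}
	}
}
```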
|
||||
683
vendor/github.com/alicebob/miniredis/v2/cmd_hash.go
generated
vendored
@@ -1,683 +0,0 @@
|
||||
// Commands from https://redis.io/commands#hash
|
||||
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"math/big"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
// commandsHash handles all hash value operations.
|
||||
func commandsHash(m *Miniredis) {
|
||||
m.srv.Register("HDEL", m.cmdHdel)
|
||||
m.srv.Register("HEXISTS", m.cmdHexists)
|
||||
m.srv.Register("HGET", m.cmdHget)
|
||||
m.srv.Register("HGETALL", m.cmdHgetall)
|
||||
m.srv.Register("HINCRBY", m.cmdHincrby)
|
||||
m.srv.Register("HINCRBYFLOAT", m.cmdHincrbyfloat)
|
||||
m.srv.Register("HKEYS", m.cmdHkeys)
|
||||
m.srv.Register("HLEN", m.cmdHlen)
|
||||
m.srv.Register("HMGET", m.cmdHmget)
|
||||
m.srv.Register("HMSET", m.cmdHmset)
|
||||
m.srv.Register("HSET", m.cmdHset)
|
||||
m.srv.Register("HSETNX", m.cmdHsetnx)
|
||||
m.srv.Register("HSTRLEN", m.cmdHstrlen)
|
||||
m.srv.Register("HVALS", m.cmdHvals)
|
||||
m.srv.Register("HSCAN", m.cmdHscan)
|
||||
}
|
||||
|
||||
// HSET
|
||||
func (m *Miniredis) cmdHset(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, pairs := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if len(pairs)%2 == 1 {
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
if t, ok := db.keys[key]; ok && t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
new := db.hashSet(key, pairs...)
|
||||
c.WriteInt(new)
|
||||
})
|
||||
}
|
||||
|
||||
// HSETNX
|
||||
func (m *Miniredis) cmdHsetnx(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
field string
|
||||
value string
|
||||
}{
|
||||
key: args[0],
|
||||
field: args[1],
|
||||
value: args[2],
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if t, ok := db.keys[opts.key]; ok && t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
if _, ok := db.hashKeys[opts.key]; !ok {
|
||||
db.hashKeys[opts.key] = map[string]string{}
|
||||
db.keys[opts.key] = "hash"
|
||||
}
|
||||
_, ok := db.hashKeys[opts.key][opts.field]
|
||||
if ok {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
db.hashKeys[opts.key][opts.field] = opts.value
|
||||
db.keyVersion[opts.key]++
|
||||
c.WriteInt(1)
|
||||
})
|
||||
}
|
||||
|
||||
// HMSET
|
||||
func (m *Miniredis) cmdHmset(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, args := args[0], args[1:]
|
||||
if len(args)%2 != 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if t, ok := db.keys[key]; ok && t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
for len(args) > 0 {
|
||||
field, value := args[0], args[1]
|
||||
args = args[2:]
|
||||
db.hashSet(key, field, value)
|
||||
}
|
||||
c.WriteOK()
|
||||
})
|
||||
}
|
||||
|
||||
// HGET
|
||||
func (m *Miniredis) cmdHget(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, field := args[0], args[1]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
if t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
value, ok := db.hashKeys[key][field]
|
||||
if !ok {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
c.WriteBulk(value)
|
||||
})
|
||||
}
|
||||
|
||||
// HDEL
|
||||
func (m *Miniredis) cmdHdel(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
fields []string
|
||||
}{
|
||||
key: args[0],
|
||||
fields: args[1:],
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[opts.key]
|
||||
if !ok {
|
||||
// No key is zero deleted
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
deleted := 0
|
||||
for _, f := range opts.fields {
|
||||
_, ok := db.hashKeys[opts.key][f]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
delete(db.hashKeys[opts.key], f)
|
||||
deleted++
|
||||
}
|
||||
c.WriteInt(deleted)
|
||||
|
||||
// Nothing left. Remove the whole key.
|
||||
if len(db.hashKeys[opts.key]) == 0 {
|
||||
db.del(opts.key, true)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// HEXISTS
|
||||
func (m *Miniredis) cmdHexists(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
field string
|
||||
}{
|
||||
key: args[0],
|
||||
field: args[1],
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[opts.key]
|
||||
if !ok {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
if _, ok := db.hashKeys[opts.key][opts.field]; !ok {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
c.WriteInt(1)
|
||||
})
|
||||
}
|
||||
|
||||
// HGETALL
|
||||
func (m *Miniredis) cmdHgetall(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
c.WriteMapLen(0)
|
||||
return
|
||||
}
|
||||
if t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteMapLen(len(db.hashKeys[key]))
|
||||
for _, k := range db.hashFields(key) {
|
||||
c.WriteBulk(k)
|
||||
c.WriteBulk(db.hashGet(key, k))
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// HKEYS
|
||||
func (m *Miniredis) cmdHkeys(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(key) {
|
||||
c.WriteLen(0)
|
||||
return
|
||||
}
|
||||
if db.t(key) != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
fields := db.hashFields(key)
|
||||
c.WriteLen(len(fields))
|
||||
for _, f := range fields {
|
||||
c.WriteBulk(f)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// HSTRLEN
|
||||
func (m *Miniredis) cmdHstrlen(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
hash, key := args[0], args[1]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[hash]
|
||||
if !ok {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
keys := db.hashKeys[hash]
|
||||
c.WriteInt(len(keys[key]))
|
||||
})
|
||||
}
|
||||
|
||||
// HVALS
|
||||
func (m *Miniredis) cmdHvals(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
c.WriteLen(0)
|
||||
return
|
||||
}
|
||||
if t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
vals := db.hashValues(key)
|
||||
c.WriteLen(len(vals))
|
||||
for _, v := range vals {
|
||||
c.WriteBulk(v)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// HLEN
|
||||
func (m *Miniredis) cmdHlen(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteInt(len(db.hashKeys[key]))
|
||||
})
|
||||
}
|
||||
|
||||
// HMGET
|
||||
func (m *Miniredis) cmdHmget(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if t, ok := db.keys[key]; ok && t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
f, ok := db.hashKeys[key]
|
||||
if !ok {
|
||||
f = map[string]string{}
|
||||
}
|
||||
|
||||
c.WriteLen(len(args) - 1)
|
||||
for _, k := range args[1:] {
|
||||
v, ok := f[k]
|
||||
if !ok {
|
||||
c.WriteNull()
|
||||
continue
|
||||
}
|
||||
c.WriteBulk(v)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// HINCRBY
|
||||
func (m *Miniredis) cmdHincrby(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
field string
|
||||
delta int
|
||||
}{
|
||||
key: args[0],
|
||||
field: args[1],
|
||||
}
|
||||
if ok := optInt(c, args[2], &opts.delta); !ok {
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if t, ok := db.keys[opts.key]; ok && t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
v, err := db.hashIncr(opts.key, opts.field, opts.delta)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
c.WriteInt(v)
|
||||
})
|
||||
}
|
||||
|
||||
// HINCRBYFLOAT
|
||||
func (m *Miniredis) cmdHincrbyfloat(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
field string
|
||||
delta *big.Float
|
||||
}{
|
||||
key: args[0],
|
||||
field: args[1],
|
||||
}
|
||||
delta, _, err := big.ParseFloat(args[2], 10, 128, 0)
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidFloat)
|
||||
return
|
||||
}
|
||||
opts.delta = delta
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if t, ok := db.keys[opts.key]; ok && t != "hash" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
v, err := db.hashIncrfloat(opts.key, opts.field, opts.delta)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
c.WriteBulk(formatBig(v))
|
||||
})
|
||||
}
|
||||
|
||||
// HSCAN
|
||||
func (m *Miniredis) cmdHscan(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
cursor int
|
||||
withMatch bool
|
||||
match string
|
||||
}{
|
||||
key: args[0],
|
||||
}
|
||||
if ok := optIntErr(c, args[1], &opts.cursor, msgInvalidCursor); !ok {
|
||||
return
|
||||
}
|
||||
args = args[2:]
|
||||
|
||||
// MATCH and COUNT options
|
||||
for len(args) > 0 {
|
||||
if strings.ToLower(args[0]) == "count" {
|
||||
// we do nothing with count
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
_, err := strconv.Atoi(args[1])
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
args = args[2:]
|
||||
continue
|
||||
}
|
||||
if strings.ToLower(args[0]) == "match" {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
opts.withMatch = true
|
||||
opts.match, args = args[1], args[2:]
|
||||
continue
|
||||
}
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
// return _all_ (matched) keys every time
|
||||
|
||||
if opts.cursor != 0 {
|
||||
// Invalid cursor.
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk("0") // no next cursor
|
||||
c.WriteLen(0) // no elements
|
||||
return
|
||||
}
|
||||
if db.exists(opts.key) && db.t(opts.key) != "hash" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
members := db.hashFields(opts.key)
|
||||
if opts.withMatch {
|
||||
members, _ = matchKeys(members, opts.match)
|
||||
}
|
||||
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk("0") // no next cursor
|
||||
// HSCAN gives key, values.
|
||||
c.WriteLen(len(members) * 2)
|
||||
for _, k := range members {
|
||||
c.WriteBulk(k)
|
||||
c.WriteBulk(db.hashGet(opts.key, k))
|
||||
}
|
||||
})
|
||||
}
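Worth noting from cmdHscan above: miniredis skips real cursor pagination, so a cursor of 0 returns every (optionally MATCH-filtered) field/value pair in a single page with the reply cursor fixed at "0", while any non-zero cursor yields an empty page. A minimal sketch of that contract is below; it uses the standard library's path.Match as a stand-in for the internal matchKeys helper, so the glob semantics are only approximate.

```go
package main

import (
	"fmt"
	"path"
	"sort"
)

// hscanAll mimics the simplified HSCAN above: cursor 0 returns all matching
// field/value pairs in one page, the returned cursor is always "0", and any
// other cursor is treated as past-the-end.
func hscanAll(hash map[string]string, cursor int, match string) (string, []string) {
	if cursor != 0 {
		return "0", nil // invalid/exhausted cursor: empty page
	}
	fields := make([]string, 0, len(hash))
	for f := range hash {
		if match != "" {
			if ok, _ := path.Match(match, f); !ok {
				continue
			}
		}
		fields = append(fields, f)
	}
	sort.Strings(fields)
	out := make([]string, 0, 2*len(fields))
	for _, f := range fields {
		out = append(out, f, hash[f]) // HSCAN replies with field, value pairs
	}
	return "0", out
}

func main() {
	h := map[string]string{"name": "mochi", "kind": "broker", "nodes": "1"}
	next, page := hscanAll(h, 0, "n*")
	fmt.Println(next, page) // 0 [name mochi nodes 1]
}
```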
|
||||
95
vendor/github.com/alicebob/miniredis/v2/cmd_hll.go
generated
vendored
@@ -1,95 +0,0 @@
|
||||
package miniredis
|
||||
|
||||
import "github.com/alicebob/miniredis/v2/server"
|
||||
|
||||
// commandsHll handles all hll related operations.
|
||||
func commandsHll(m *Miniredis) {
|
||||
m.srv.Register("PFADD", m.cmdPfadd)
|
||||
m.srv.Register("PFCOUNT", m.cmdPfcount)
|
||||
m.srv.Register("PFMERGE", m.cmdPfmerge)
|
||||
}
|
||||
|
||||
// PFADD
|
||||
func (m *Miniredis) cmdPfadd(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, items := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if db.exists(key) && db.t(key) != "hll" {
|
||||
c.WriteError(ErrNotValidHllValue.Error())
|
||||
return
|
||||
}
|
||||
|
||||
altered := db.hllAdd(key, items...)
|
||||
c.WriteInt(altered)
|
||||
})
|
||||
}
|
||||
|
||||
// PFCOUNT
|
||||
func (m *Miniredis) cmdPfcount(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
keys := args
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
count, err := db.hllCount(keys)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteInt(count)
|
||||
})
|
||||
}
|
||||
|
||||
// PFMERGE
|
||||
func (m *Miniredis) cmdPfmerge(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
keys := args
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if err := db.hllMerge(keys); err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
c.WriteOK()
|
||||
})
|
||||
}
|
||||
40
vendor/github.com/alicebob/miniredis/v2/cmd_info.go
generated
vendored
@@ -1,40 +0,0 @@
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
// Command 'INFO' from https://redis.io/commands/info/
|
||||
func (m *Miniredis) cmdInfo(c *server.Peer, cmd string, args []string) {
|
||||
if !m.isValidCMD(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
if len(args) > 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
const (
|
||||
clientsSectionName = "clients"
|
||||
clientsSectionContent = "# Clients\nconnected_clients:%d\r\n"
|
||||
)
|
||||
|
||||
var result string
|
||||
|
||||
for _, key := range args {
|
||||
if key != clientsSectionName {
|
||||
setDirty(c)
|
||||
c.WriteError(fmt.Sprintf("section (%s) is not supported", key))
|
||||
return
|
||||
}
|
||||
}
|
||||
result = fmt.Sprintf(clientsSectionContent, m.Server().ClientsLen())
|
||||
|
||||
c.WriteBulk(result)
|
||||
})
|
||||
}
|
||||
986
vendor/github.com/alicebob/miniredis/v2/cmd_list.go
generated
vendored
@@ -1,986 +0,0 @@
|
||||
// Commands from https://redis.io/commands#list
|
||||
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
type leftright int

const (
	left leftright = iota
	right
)
|
||||
|
||||
// commandsList handles list commands (mostly L*)
|
||||
func commandsList(m *Miniredis) {
|
||||
m.srv.Register("BLPOP", m.cmdBlpop)
|
||||
m.srv.Register("BRPOP", m.cmdBrpop)
|
||||
m.srv.Register("BRPOPLPUSH", m.cmdBrpoplpush)
|
||||
m.srv.Register("LINDEX", m.cmdLindex)
|
||||
m.srv.Register("LPOS", m.cmdLpos)
|
||||
m.srv.Register("LINSERT", m.cmdLinsert)
|
||||
m.srv.Register("LLEN", m.cmdLlen)
|
||||
m.srv.Register("LPOP", m.cmdLpop)
|
||||
m.srv.Register("LPUSH", m.cmdLpush)
|
||||
m.srv.Register("LPUSHX", m.cmdLpushx)
|
||||
m.srv.Register("LRANGE", m.cmdLrange)
|
||||
m.srv.Register("LREM", m.cmdLrem)
|
||||
m.srv.Register("LSET", m.cmdLset)
|
||||
m.srv.Register("LTRIM", m.cmdLtrim)
|
||||
m.srv.Register("RPOP", m.cmdRpop)
|
||||
m.srv.Register("RPOPLPUSH", m.cmdRpoplpush)
|
||||
m.srv.Register("RPUSH", m.cmdRpush)
|
||||
m.srv.Register("RPUSHX", m.cmdRpushx)
|
||||
m.srv.Register("LMOVE", m.cmdLmove)
|
||||
}
|
||||
|
||||
// BLPOP
|
||||
func (m *Miniredis) cmdBlpop(c *server.Peer, cmd string, args []string) {
|
||||
m.cmdBXpop(c, cmd, args, left)
|
||||
}
|
||||
|
||||
// BRPOP
|
||||
func (m *Miniredis) cmdBrpop(c *server.Peer, cmd string, args []string) {
|
||||
m.cmdBXpop(c, cmd, args, right)
|
||||
}
|
||||
|
||||
func (m *Miniredis) cmdBXpop(c *server.Peer, cmd string, args []string, lr leftright) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
timeoutS := args[len(args)-1]
|
||||
keys := args[:len(args)-1]
|
||||
|
||||
timeout, err := strconv.Atoi(timeoutS)
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidTimeout)
|
||||
return
|
||||
}
|
||||
if timeout < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgNegTimeout)
|
||||
return
|
||||
}
|
||||
|
||||
blocking(
|
||||
m,
|
||||
c,
|
||||
time.Duration(timeout)*time.Second,
|
||||
func(c *server.Peer, ctx *connCtx) bool {
|
||||
db := m.db(ctx.selectedDB)
|
||||
for _, key := range keys {
|
||||
if !db.exists(key) {
|
||||
continue
|
||||
}
|
||||
if db.t(key) != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return true
|
||||
}
|
||||
|
||||
if len(db.listKeys[key]) == 0 {
|
||||
continue
|
||||
}
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk(key)
|
||||
var v string
|
||||
switch lr {
|
||||
case left:
|
||||
v = db.listLpop(key)
|
||||
case right:
|
||||
v = db.listPop(key)
|
||||
}
|
||||
c.WriteBulk(v)
|
||||
return true
|
||||
}
|
||||
return false
|
||||
},
|
||||
func(c *server.Peer) {
|
||||
// timeout
|
||||
c.WriteLen(-1)
|
||||
},
|
||||
)
|
||||
}
|
||||
|
||||
// LINDEX
|
||||
func (m *Miniredis) cmdLindex(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, offsets := args[0], args[1]
|
||||
|
||||
offset, err := strconv.Atoi(offsets)
|
||||
if err != nil || offsets == "-0" {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
// No such key
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
if t != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
l := db.listKeys[key]
|
||||
if offset < 0 {
|
||||
offset = len(l) + offset
|
||||
}
|
||||
if offset < 0 || offset > len(l)-1 {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
c.WriteBulk(l[offset])
|
||||
})
|
||||
}
|
||||
|
||||
// LPOS key element [RANK rank] [COUNT num-matches] [MAXLEN len]
|
||||
func (m *Miniredis) cmdLpos(c *server.Peer, cmd string, args []string) {
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
if len(args) == 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
// Extract options from arguments if present.
|
||||
//
|
||||
// Redis allows duplicate options and uses the last specified.
|
||||
// `LPOS key term RANK 1 RANK 2` is effectively the same as
|
||||
// `LPOS key term RANK 2`
|
||||
if len(args)%2 == 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
rank, count := 1, 1 // Default values
|
||||
var maxlen int // Default value is the list length (see below)
|
||||
var countSpecified, maxlenSpecified bool
|
||||
if len(args) > 2 {
|
||||
for i := 2; i < len(args); i++ {
|
||||
if i%2 == 0 {
|
||||
val := args[i+1]
|
||||
var err error
|
||||
switch strings.ToLower(args[i]) {
|
||||
case "rank":
|
||||
if rank, err = strconv.Atoi(val); err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
if rank == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgRankIsZero)
|
||||
return
|
||||
}
|
||||
case "count":
|
||||
countSpecified = true
|
||||
if count, err = strconv.Atoi(val); err != nil || count < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgCountIsNegative)
|
||||
return
|
||||
}
|
||||
case "maxlen":
|
||||
maxlenSpecified = true
|
||||
if maxlen, err = strconv.Atoi(val); err != nil || maxlen < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgMaxLengthIsNegative)
|
||||
return
|
||||
}
|
||||
default:
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
key, element := args[0], args[1]
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
// No such key
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
if t != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
l := db.listKeys[key]
|
||||
|
||||
// RANK cannot be zero (see above).
|
||||
// If RANK is positive search forward (left to right).
|
||||
// If RANK is negative search backward (right to left).
|
||||
// Iterator returns true to continue iterating.
|
||||
iterate := func(iterator func(i int, e string) bool) {
|
||||
comparisons := len(l)
|
||||
// Only use max length if specified, not zero, and less than total length.
|
||||
// When max length is specified, but is zero, this means "unlimited".
|
||||
if maxlenSpecified && maxlen != 0 && maxlen < len(l) {
|
||||
comparisons = maxlen
|
||||
}
|
||||
if rank > 0 {
|
||||
for i := 0; i < comparisons; i++ {
|
||||
if resume := iterator(i, l[i]); !resume {
|
||||
return
|
||||
}
|
||||
}
|
||||
} else if rank < 0 {
|
||||
start := len(l) - 1
|
||||
end := len(l) - comparisons
|
||||
for i := start; i >= end; i-- {
|
||||
if resume := iterator(i, l[i]); !resume {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var currentRank, currentCount int
|
||||
vals := make([]int, 0, count)
|
||||
iterate(func(i int, e string) bool {
|
||||
if e == element {
|
||||
currentRank++
|
||||
// Only collect values only after surpassing the absolute value of rank.
|
||||
if rank > 0 && currentRank < rank {
|
||||
return true
|
||||
}
|
||||
if rank < 0 && currentRank < -rank {
|
||||
return true
|
||||
}
|
||||
vals = append(vals, i)
|
||||
currentCount++
|
||||
if currentCount == count {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
})
|
||||
|
||||
if !countSpecified && len(vals) == 0 {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
if !countSpecified && len(vals) == 1 {
|
||||
c.WriteInt(vals[0])
|
||||
return
|
||||
}
|
||||
c.WriteLen(len(vals))
|
||||
for _, val := range vals {
|
||||
c.WriteInt(val)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// LINSERT
|
||||
func (m *Miniredis) cmdLinsert(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 4 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
where := 0
|
||||
switch strings.ToLower(args[1]) {
|
||||
case "before":
|
||||
where = -1
|
||||
case "after":
|
||||
where = +1
|
||||
default:
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
pivot := args[2]
|
||||
value := args[3]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
// No such key
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if t != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
l := db.listKeys[key]
|
||||
for i, el := range l {
|
||||
if el != pivot {
|
||||
continue
|
||||
}
|
||||
|
||||
if where < 0 {
|
||||
l = append(l[:i], append(listKey{value}, l[i:]...)...)
|
||||
} else {
|
||||
if i == len(l)-1 {
|
||||
l = append(l, value)
|
||||
} else {
|
||||
l = append(l[:i+1], append(listKey{value}, l[i+1:]...)...)
|
||||
}
|
||||
}
|
||||
db.listKeys[key] = l
|
||||
db.keyVersion[key]++
|
||||
c.WriteInt(len(l))
|
||||
return
|
||||
}
|
||||
c.WriteInt(-1)
|
||||
})
|
||||
}
|
||||
|
||||
// LLEN
|
||||
func (m *Miniredis) cmdLlen(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
// No such key. That's zero length.
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if t != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteInt(len(db.listKeys[key]))
|
||||
})
|
||||
}
|
||||
|
||||
// LPOP
|
||||
func (m *Miniredis) cmdLpop(c *server.Peer, cmd string, args []string) {
|
||||
m.cmdXpop(c, cmd, args, left)
|
||||
}
|
||||
|
||||
// RPOP
|
||||
func (m *Miniredis) cmdRpop(c *server.Peer, cmd string, args []string) {
|
||||
m.cmdXpop(c, cmd, args, right)
|
||||
}
|
||||
|
||||
func (m *Miniredis) cmdXpop(c *server.Peer, cmd string, args []string, lr leftright) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
key string
|
||||
withCount bool
|
||||
count int
|
||||
}
|
||||
|
||||
opts.key, args = args[0], args[1:]
|
||||
if len(args) > 0 {
|
||||
if ok := optInt(c, args[0], &opts.count); !ok {
|
||||
return
|
||||
}
|
||||
if opts.count < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgOutOfRange)
|
||||
return
|
||||
}
|
||||
opts.withCount = true
|
||||
args = args[1:]
|
||||
}
|
||||
if len(args) > 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(opts.key) {
|
||||
// non-existing key is fine
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
if db.t(opts.key) != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
if opts.withCount {
|
||||
var popped []string
|
||||
for opts.count > 0 && len(db.listKeys[opts.key]) > 0 {
|
||||
switch lr {
|
||||
case left:
|
||||
popped = append(popped, db.listLpop(opts.key))
|
||||
case right:
|
||||
popped = append(popped, db.listPop(opts.key))
|
||||
}
|
||||
opts.count -= 1
|
||||
}
|
||||
if len(popped) == 0 {
|
||||
c.WriteLen(-1)
|
||||
} else {
|
||||
c.WriteStrings(popped)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
var elem string
|
||||
switch lr {
|
||||
case left:
|
||||
elem = db.listLpop(opts.key)
|
||||
case right:
|
||||
elem = db.listPop(opts.key)
|
||||
}
|
||||
c.WriteBulk(elem)
|
||||
})
|
||||
}
|
||||
|
||||
// LPUSH
|
||||
func (m *Miniredis) cmdLpush(c *server.Peer, cmd string, args []string) {
|
||||
m.cmdXpush(c, cmd, args, left)
|
||||
}
|
||||
|
||||
// RPUSH
|
||||
func (m *Miniredis) cmdRpush(c *server.Peer, cmd string, args []string) {
|
||||
m.cmdXpush(c, cmd, args, right)
|
||||
}
|
||||
|
||||
func (m *Miniredis) cmdXpush(c *server.Peer, cmd string, args []string, lr leftright) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, args := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if db.exists(key) && db.t(key) != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
var newLen int
|
||||
for _, value := range args {
|
||||
switch lr {
|
||||
case left:
|
||||
newLen = db.listLpush(key, value)
|
||||
case right:
|
||||
newLen = db.listPush(key, value)
|
||||
}
|
||||
}
|
||||
c.WriteInt(newLen)
|
||||
})
|
||||
}
|
||||
|
||||
// LPUSHX
|
||||
func (m *Miniredis) cmdLpushx(c *server.Peer, cmd string, args []string) {
|
||||
m.cmdXpushx(c, cmd, args, left)
|
||||
}
|
||||
|
||||
// RPUSHX
|
||||
func (m *Miniredis) cmdRpushx(c *server.Peer, cmd string, args []string) {
|
||||
m.cmdXpushx(c, cmd, args, right)
|
||||
}
|
||||
|
||||
func (m *Miniredis) cmdXpushx(c *server.Peer, cmd string, args []string, lr leftright) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, args := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(key) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if db.t(key) != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
var newLen int
|
||||
for _, value := range args {
|
||||
switch lr {
|
||||
case left:
|
||||
newLen = db.listLpush(key, value)
|
||||
case right:
|
||||
newLen = db.listPush(key, value)
|
||||
}
|
||||
}
|
||||
c.WriteInt(newLen)
|
||||
})
|
||||
}
|
||||
|
||||
// LRANGE
|
||||
func (m *Miniredis) cmdLrange(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
start int
|
||||
end int
|
||||
}{
|
||||
key: args[0],
|
||||
}
|
||||
if ok := optInt(c, args[1], &opts.start); !ok {
|
||||
return
|
||||
}
|
||||
if ok := optInt(c, args[2], &opts.end); !ok {
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if t, ok := db.keys[opts.key]; ok && t != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
l := db.listKeys[opts.key]
|
||||
if len(l) == 0 {
|
||||
c.WriteLen(0)
|
||||
return
|
||||
}
|
||||
|
||||
rs, re := redisRange(len(l), opts.start, opts.end, false)
|
||||
c.WriteLen(re - rs)
|
||||
for _, el := range l[rs:re] {
|
||||
c.WriteBulk(el)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// LREM
|
||||
func (m *Miniredis) cmdLrem(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
key string
|
||||
count int
|
||||
value string
|
||||
}
|
||||
opts.key = args[0]
|
||||
if ok := optInt(c, args[1], &opts.count); !ok {
|
||||
return
|
||||
}
|
||||
opts.value = args[2]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(opts.key) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
if db.t(opts.key) != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
l := db.listKeys[opts.key]
|
||||
if opts.count < 0 {
|
||||
reverseSlice(l)
|
||||
}
|
||||
deleted := 0
|
||||
newL := []string{}
|
||||
toDelete := len(l)
|
||||
if opts.count < 0 {
|
||||
toDelete = -opts.count
|
||||
}
|
||||
if opts.count > 0 {
|
||||
toDelete = opts.count
|
||||
}
|
||||
for _, el := range l {
|
||||
if el == opts.value {
|
||||
if toDelete > 0 {
|
||||
deleted++
|
||||
toDelete--
|
||||
continue
|
||||
}
|
||||
}
|
||||
newL = append(newL, el)
|
||||
}
|
||||
if opts.count < 0 {
|
||||
reverseSlice(newL)
|
||||
}
|
||||
if len(newL) == 0 {
|
||||
db.del(opts.key, true)
|
||||
} else {
|
||||
db.listKeys[opts.key] = newL
|
||||
db.keyVersion[opts.key]++
|
||||
}
|
||||
|
||||
c.WriteInt(deleted)
|
||||
})
|
||||
}
|
||||
|
||||
// LSET
|
||||
func (m *Miniredis) cmdLset(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
key string
|
||||
index int
|
||||
value string
|
||||
}
|
||||
opts.key = args[0]
|
||||
if ok := optInt(c, args[1], &opts.index); !ok {
|
||||
return
|
||||
}
|
||||
opts.value = args[2]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(opts.key) {
|
||||
c.WriteError(msgKeyNotFound)
|
||||
return
|
||||
}
|
||||
if db.t(opts.key) != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
l := db.listKeys[opts.key]
|
||||
index := opts.index
|
||||
if index < 0 {
|
||||
index = len(l) + index
|
||||
}
|
||||
if index < 0 || index > len(l)-1 {
|
||||
c.WriteError(msgOutOfRange)
|
||||
return
|
||||
}
|
||||
l[index] = opts.value
|
||||
db.keyVersion[opts.key]++
|
||||
|
||||
c.WriteOK()
|
||||
})
|
||||
}
|
||||
|
||||
// LTRIM
|
||||
func (m *Miniredis) cmdLtrim(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
key string
|
||||
start int
|
||||
end int
|
||||
}
|
||||
|
||||
opts.key = args[0]
|
||||
if ok := optInt(c, args[1], &opts.start); !ok {
|
||||
return
|
||||
}
|
||||
if ok := optInt(c, args[2], &opts.end); !ok {
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
t, ok := db.keys[opts.key]
|
||||
if !ok {
|
||||
c.WriteOK()
|
||||
return
|
||||
}
|
||||
if t != "list" {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
|
||||
l := db.listKeys[opts.key]
|
||||
rs, re := redisRange(len(l), opts.start, opts.end, false)
|
||||
l = l[rs:re]
|
||||
if len(l) == 0 {
|
||||
db.del(opts.key, true)
|
||||
} else {
|
||||
db.listKeys[opts.key] = l
|
||||
db.keyVersion[opts.key]++
|
||||
}
|
||||
c.WriteOK()
|
||||
})
|
||||
}
|
||||
|
||||
// RPOPLPUSH
|
||||
func (m *Miniredis) cmdRpoplpush(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
src, dst := args[0], args[1]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(src) {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
if db.t(src) != "list" || (db.exists(dst) && db.t(dst) != "list") {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
elem := db.listPop(src)
|
||||
db.listLpush(dst, elem)
|
||||
c.WriteBulk(elem)
|
||||
})
|
||||
}
|
||||
|
||||
// BRPOPLPUSH
|
||||
func (m *Miniredis) cmdBrpoplpush(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
src string
|
||||
dst string
|
||||
timeout int
|
||||
}
|
||||
opts.src = args[0]
|
||||
opts.dst = args[1]
|
||||
if ok := optIntErr(c, args[2], &opts.timeout, msgInvalidTimeout); !ok {
|
||||
return
|
||||
}
|
||||
if opts.timeout < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgNegTimeout)
|
||||
return
|
||||
}
|
||||
|
||||
blocking(
|
||||
m,
|
||||
c,
|
||||
time.Duration(opts.timeout)*time.Second,
|
||||
func(c *server.Peer, ctx *connCtx) bool {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(opts.src) {
|
||||
return false
|
||||
}
|
||||
if db.t(opts.src) != "list" || (db.exists(opts.dst) && db.t(opts.dst) != "list") {
|
||||
c.WriteError(msgWrongType)
|
||||
return true
|
||||
}
|
||||
if len(db.listKeys[opts.src]) == 0 {
|
||||
return false
|
||||
}
|
||||
elem := db.listPop(opts.src)
|
||||
db.listLpush(opts.dst, elem)
|
||||
c.WriteBulk(elem)
|
||||
return true
|
||||
},
|
||||
func(c *server.Peer) {
|
||||
// timeout
|
||||
c.WriteLen(-1)
|
||||
},
|
||||
)
|
||||
}
|
||||
|
||||
// LMOVE
|
||||
func (m *Miniredis) cmdLmove(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 4 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
src string
|
||||
dst string
|
||||
srcDir string
|
||||
dstDir string
|
||||
}{
|
||||
src: args[0],
|
||||
dst: args[1],
|
||||
srcDir: strings.ToLower(args[2]),
|
||||
dstDir: strings.ToLower(args[3]),
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(opts.src) {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
if db.t(opts.src) != "list" || (db.exists(opts.dst) && db.t(opts.dst) != "list") {
|
||||
c.WriteError(msgWrongType)
|
||||
return
|
||||
}
|
||||
var elem string
|
||||
switch opts.srcDir {
|
||||
case "left":
|
||||
elem = db.listLpop(opts.src)
|
||||
case "right":
|
||||
elem = db.listPop(opts.src)
|
||||
default:
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
|
||||
switch opts.dstDir {
|
||||
case "left":
|
||||
db.listLpush(opts.dst, elem)
|
||||
case "right":
|
||||
db.listPush(opts.dst, elem)
|
||||
default:
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
c.WriteBulk(elem)
|
||||
})
|
||||
}
|
||||
256
vendor/github.com/alicebob/miniredis/v2/cmd_pubsub.go
generated
vendored
@@ -1,256 +0,0 @@
|
||||
// Commands from https://redis.io/commands#pubsub
|
||||
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
// commandsPubsub handles all PUB/SUB operations.
|
||||
func commandsPubsub(m *Miniredis) {
|
||||
m.srv.Register("SUBSCRIBE", m.cmdSubscribe)
|
||||
m.srv.Register("UNSUBSCRIBE", m.cmdUnsubscribe)
|
||||
m.srv.Register("PSUBSCRIBE", m.cmdPsubscribe)
|
||||
m.srv.Register("PUNSUBSCRIBE", m.cmdPunsubscribe)
|
||||
m.srv.Register("PUBLISH", m.cmdPublish)
|
||||
m.srv.Register("PUBSUB", m.cmdPubSub)
|
||||
}
|
||||
|
||||
// SUBSCRIBE
|
||||
func (m *Miniredis) cmdSubscribe(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if getCtx(c).nested {
|
||||
c.WriteError(msgNotFromScripts)
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
sub := m.subscribedState(c)
|
||||
for _, channel := range args {
|
||||
n := sub.Subscribe(channel)
|
||||
c.Block(func(w *server.Writer) {
|
||||
w.WritePushLen(3)
|
||||
w.WriteBulk("subscribe")
|
||||
w.WriteBulk(channel)
|
||||
w.WriteInt(n)
|
||||
})
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// UNSUBSCRIBE
|
||||
func (m *Miniredis) cmdUnsubscribe(c *server.Peer, cmd string, args []string) {
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if getCtx(c).nested {
|
||||
c.WriteError(msgNotFromScripts)
|
||||
return
|
||||
}
|
||||
|
||||
channels := args
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
sub := m.subscribedState(c)
|
||||
|
||||
if len(channels) == 0 {
|
||||
channels = sub.Channels()
|
||||
}
|
||||
|
||||
// there is no de-duplication
|
||||
for _, channel := range channels {
|
||||
n := sub.Unsubscribe(channel)
|
||||
c.Block(func(w *server.Writer) {
|
||||
w.WritePushLen(3)
|
||||
w.WriteBulk("unsubscribe")
|
||||
w.WriteBulk(channel)
|
||||
w.WriteInt(n)
|
||||
})
|
||||
}
|
||||
if len(channels) == 0 {
|
||||
// special case: there is always a reply
|
||||
c.Block(func(w *server.Writer) {
				w.WritePushLen(3)
				w.WriteBulk("unsubscribe")
				w.WriteNull()
				w.WriteInt(0)
			})
		}

		if sub.Count() == 0 {
			endSubscriber(m, c)
		}
	})
}

// PSUBSCRIBE
func (m *Miniredis) cmdPsubscribe(c *server.Peer, cmd string, args []string) {
	if len(args) < 1 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if getCtx(c).nested {
		c.WriteError(msgNotFromScripts)
		return
	}

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		sub := m.subscribedState(c)
		for _, pat := range args {
			n := sub.Psubscribe(pat)
			c.Block(func(w *server.Writer) {
				w.WritePushLen(3)
				w.WriteBulk("psubscribe")
				w.WriteBulk(pat)
				w.WriteInt(n)
			})
		}
	})
}

// PUNSUBSCRIBE
func (m *Miniredis) cmdPunsubscribe(c *server.Peer, cmd string, args []string) {
	if !m.handleAuth(c) {
		return
	}
	if getCtx(c).nested {
		c.WriteError(msgNotFromScripts)
		return
	}

	patterns := args

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		sub := m.subscribedState(c)

		if len(patterns) == 0 {
			patterns = sub.Patterns()
		}

		// there is no de-duplication
		for _, pat := range patterns {
			n := sub.Punsubscribe(pat)
			c.Block(func(w *server.Writer) {
				w.WritePushLen(3)
				w.WriteBulk("punsubscribe")
				w.WriteBulk(pat)
				w.WriteInt(n)
			})
		}
		if len(patterns) == 0 {
			// special case: there is always a reply
			c.Block(func(w *server.Writer) {
				w.WritePushLen(3)
				w.WriteBulk("punsubscribe")
				w.WriteNull()
				w.WriteInt(0)
			})
		}

		if sub.Count() == 0 {
			endSubscriber(m, c)
		}
	})
}

// PUBLISH
func (m *Miniredis) cmdPublish(c *server.Peer, cmd string, args []string) {
	if len(args) != 2 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	channel, mesg := args[0], args[1]

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		c.WriteInt(m.publish(channel, mesg))
	})
}

// PUBSUB
func (m *Miniredis) cmdPubSub(c *server.Peer, cmd string, args []string) {
	if len(args) < 1 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}

	if m.checkPubsub(c, cmd) {
		return
	}

	subcommand := strings.ToUpper(args[0])
	subargs := args[1:]
	var argsOk bool

	switch subcommand {
	case "CHANNELS":
		argsOk = len(subargs) < 2
	case "NUMSUB":
		argsOk = true
	case "NUMPAT":
		argsOk = len(subargs) == 0
	default:
		argsOk = false
	}

	if !argsOk {
		setDirty(c)
		c.WriteError(fmt.Sprintf(msgFPubsubUsage, subcommand))
		return
	}

	if !m.handleAuth(c) {
		return
	}

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		switch subcommand {
		case "CHANNELS":
			pat := ""
			if len(subargs) == 1 {
				pat = subargs[0]
			}

			allsubs := m.allSubscribers()
			channels := activeChannels(allsubs, pat)

			c.WriteLen(len(channels))
			for _, channel := range channels {
				c.WriteBulk(channel)
			}

		case "NUMSUB":
			subs := m.allSubscribers()
			c.WriteLen(len(subargs) * 2)
			for _, channel := range subargs {
				c.WriteBulk(channel)
				c.WriteInt(countSubs(subs, channel))
			}

		case "NUMPAT":
			c.WriteInt(countPsubs(m.allSubscribers()))
		}
	})
}
281 vendor/github.com/alicebob/miniredis/v2/cmd_scripting.go (generated, vendored)
@@ -1,281 +0,0 @@
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"crypto/sha1"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"io"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
luajson "github.com/alicebob/gopher-json"
|
||||
lua "github.com/yuin/gopher-lua"
|
||||
"github.com/yuin/gopher-lua/parse"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
func commandsScripting(m *Miniredis) {
|
||||
m.srv.Register("EVAL", m.cmdEval)
|
||||
m.srv.Register("EVALSHA", m.cmdEvalsha)
|
||||
m.srv.Register("SCRIPT", m.cmdScript)
|
||||
}
|
||||
|
||||
// Execute lua. Needs to run m.Lock()ed, from within withTx().
|
||||
// Returns true if the lua was OK (and hence should be cached).
|
||||
func (m *Miniredis) runLuaScript(c *server.Peer, script string, args []string) bool {
|
||||
l := lua.NewState(lua.Options{SkipOpenLibs: true})
|
||||
defer l.Close()
|
||||
|
||||
// Taken from the go-lua manual
|
||||
for _, pair := range []struct {
|
||||
n string
|
||||
f lua.LGFunction
|
||||
}{
|
||||
{lua.LoadLibName, lua.OpenPackage},
|
||||
{lua.BaseLibName, lua.OpenBase},
|
||||
{lua.CoroutineLibName, lua.OpenCoroutine},
|
||||
{lua.TabLibName, lua.OpenTable},
|
||||
{lua.StringLibName, lua.OpenString},
|
||||
{lua.MathLibName, lua.OpenMath},
|
||||
{lua.DebugLibName, lua.OpenDebug},
|
||||
} {
|
||||
if err := l.CallByParam(lua.P{
|
||||
Fn: l.NewFunction(pair.f),
|
||||
NRet: 0,
|
||||
Protect: true,
|
||||
}, lua.LString(pair.n)); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
|
||||
luajson.Preload(l)
|
||||
requireGlobal(l, "cjson", "json")
|
||||
|
||||
// set global variable KEYS
|
||||
keysTable := l.NewTable()
|
||||
keysS, args := args[0], args[1:]
|
||||
keysLen, err := strconv.Atoi(keysS)
|
||||
if err != nil {
|
||||
c.WriteError(msgInvalidInt)
|
||||
return false
|
||||
}
|
||||
if keysLen < 0 {
|
||||
c.WriteError(msgNegativeKeysNumber)
|
||||
return false
|
||||
}
|
||||
if keysLen > len(args) {
|
||||
c.WriteError(msgInvalidKeysNumber)
|
||||
return false
|
||||
}
|
||||
keys, args := args[:keysLen], args[keysLen:]
|
||||
for i, k := range keys {
|
||||
l.RawSet(keysTable, lua.LNumber(i+1), lua.LString(k))
|
||||
}
|
||||
l.SetGlobal("KEYS", keysTable)
|
||||
|
||||
argvTable := l.NewTable()
|
||||
for i, a := range args {
|
||||
l.RawSet(argvTable, lua.LNumber(i+1), lua.LString(a))
|
||||
}
|
||||
l.SetGlobal("ARGV", argvTable)
|
||||
|
||||
redisFuncs, redisConstants := mkLua(m.srv, c)
|
||||
// Register command handlers
|
||||
l.Push(l.NewFunction(func(l *lua.LState) int {
|
||||
mod := l.RegisterModule("redis", redisFuncs).(*lua.LTable)
|
||||
for k, v := range redisConstants {
|
||||
mod.RawSetString(k, v)
|
||||
}
|
||||
l.Push(mod)
|
||||
return 1
|
||||
}))
|
||||
|
||||
l.DoString(protectGlobals)
|
||||
|
||||
l.Push(lua.LString("redis"))
|
||||
l.Call(1, 0)
|
||||
|
||||
if err := l.DoString(script); err != nil {
|
||||
c.WriteError(errLuaParseError(err))
|
||||
return false
|
||||
}
|
||||
|
||||
luaToRedis(l, c, l.Get(1))
|
||||
return true
|
||||
}
|
||||
|
||||
func (m *Miniredis) cmdEval(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
if getCtx(c).nested {
|
||||
c.WriteError(msgNotFromScripts)
|
||||
return
|
||||
}
|
||||
|
||||
script, args := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
ok := m.runLuaScript(c, script, args)
|
||||
if ok {
|
||||
sha := sha1Hex(script)
|
||||
m.scripts[sha] = script
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func (m *Miniredis) cmdEvalsha(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
if getCtx(c).nested {
|
||||
c.WriteError(msgNotFromScripts)
|
||||
return
|
||||
}
|
||||
|
||||
sha, args := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
script, ok := m.scripts[sha]
|
||||
if !ok {
|
||||
c.WriteError(msgNoScriptFound)
|
||||
return
|
||||
}
|
||||
|
||||
m.runLuaScript(c, script, args)
|
||||
})
|
||||
}
|
||||
|
||||
func (m *Miniredis) cmdScript(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
if getCtx(c).nested {
|
||||
c.WriteError(msgNotFromScripts)
|
||||
return
|
||||
}
|
||||
|
||||
subcmd, args := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
switch strings.ToLower(subcmd) {
|
||||
case "load":
|
||||
if len(args) != 1 {
|
||||
c.WriteError(fmt.Sprintf(msgFScriptUsage, "LOAD"))
|
||||
return
|
||||
}
|
||||
script := args[0]
|
||||
|
||||
if _, err := parse.Parse(strings.NewReader(script), "user_script"); err != nil {
|
||||
c.WriteError(errLuaParseError(err))
|
||||
return
|
||||
}
|
||||
sha := sha1Hex(script)
|
||||
m.scripts[sha] = script
|
||||
c.WriteBulk(sha)
|
||||
|
||||
case "exists":
|
||||
c.WriteLen(len(args))
|
||||
for _, arg := range args {
|
||||
if _, ok := m.scripts[arg]; ok {
|
||||
c.WriteInt(1)
|
||||
} else {
|
||||
c.WriteInt(0)
|
||||
}
|
||||
}
|
||||
|
||||
case "flush":
|
||||
if len(args) == 1 {
|
||||
switch strings.ToUpper(args[0]) {
|
||||
case "SYNC", "ASYNC":
|
||||
args = args[1:]
|
||||
default:
|
||||
}
|
||||
}
|
||||
if len(args) != 0 {
|
||||
c.WriteError(msgScriptFlush)
|
||||
return
|
||||
}
|
||||
|
||||
m.scripts = map[string]string{}
|
||||
c.WriteOK()
|
||||
|
||||
default:
|
||||
c.WriteError(fmt.Sprintf(msgFScriptUsage, strings.ToUpper(subcmd)))
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func sha1Hex(s string) string {
|
||||
h := sha1.New()
|
||||
io.WriteString(h, s)
|
||||
return hex.EncodeToString(h.Sum(nil))
|
||||
}
|
||||
|
||||
// requireGlobal imports module modName into the global namespace with the
|
||||
// identifier id. panics if an error results from the function execution
|
||||
func requireGlobal(l *lua.LState, id, modName string) {
|
||||
if err := l.CallByParam(lua.P{
|
||||
Fn: l.GetGlobal("require"),
|
||||
NRet: 1,
|
||||
Protect: true,
|
||||
}, lua.LString(modName)); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
mod := l.Get(-1)
|
||||
l.Pop(1)
|
||||
|
||||
l.SetGlobal(id, mod)
|
||||
}
|
||||
|
||||
// the following script protects globals
|
||||
// it is based on: http://metalua.luaforge.net/src/lib/strict.lua.html
|
||||
var protectGlobals = `
|
||||
local dbg=debug
|
||||
local mt = {}
|
||||
setmetatable(_G, mt)
|
||||
mt.__newindex = function (t, n, v)
|
||||
if dbg.getinfo(2) then
|
||||
local w = dbg.getinfo(2, "S").what
|
||||
if w ~= "C" then
|
||||
error("Script attempted to create global variable '"..tostring(n).."'", 2)
|
||||
end
|
||||
end
|
||||
rawset(t, n, v)
|
||||
end
|
||||
mt.__index = function (t, n)
|
||||
if dbg.getinfo(2) and dbg.getinfo(2, "S").what ~= "C" then
|
||||
error("Script attempted to access nonexistent global variable '"..tostring(n).."'", 2)
|
||||
end
|
||||
return rawget(t, n)
|
||||
end
|
||||
debug = nil
|
||||
|
||||
`
|
||||
112 vendor/github.com/alicebob/miniredis/v2/cmd_server.go (generated, vendored)
@@ -1,112 +0,0 @@
// Commands from https://redis.io/commands#server

package miniredis

import (
	"strconv"
	"strings"

	"github.com/alicebob/miniredis/v2/server"
)

func commandsServer(m *Miniredis) {
	m.srv.Register("COMMAND", m.cmdCommand)
	m.srv.Register("DBSIZE", m.cmdDbsize)
	m.srv.Register("FLUSHALL", m.cmdFlushall)
	m.srv.Register("FLUSHDB", m.cmdFlushdb)
	m.srv.Register("INFO", m.cmdInfo)
	m.srv.Register("TIME", m.cmdTime)
}

// DBSIZE
func (m *Miniredis) cmdDbsize(c *server.Peer, cmd string, args []string) {
	if len(args) > 0 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		db := m.db(ctx.selectedDB)

		c.WriteInt(len(db.keys))
	})
}

// FLUSHALL
func (m *Miniredis) cmdFlushall(c *server.Peer, cmd string, args []string) {
	if len(args) > 0 && strings.ToLower(args[0]) == "async" {
		args = args[1:]
	}
	if len(args) > 0 {
		setDirty(c)
		c.WriteError(msgSyntaxError)
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		m.flushAll()
		c.WriteOK()
	})
}

// FLUSHDB
func (m *Miniredis) cmdFlushdb(c *server.Peer, cmd string, args []string) {
	if len(args) > 0 && strings.ToLower(args[0]) == "async" {
		args = args[1:]
	}
	if len(args) > 0 {
		setDirty(c)
		c.WriteError(msgSyntaxError)
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		m.db(ctx.selectedDB).flush()
		c.WriteOK()
	})
}

// TIME
func (m *Miniredis) cmdTime(c *server.Peer, cmd string, args []string) {
	if len(args) > 0 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		now := m.effectiveNow()
		nanos := now.UnixNano()
		seconds := nanos / 1_000_000_000
		microseconds := (nanos / 1_000) % 1_000_000

		c.WriteLen(2)
		c.WriteBulk(strconv.FormatInt(seconds, 10))
		c.WriteBulk(strconv.FormatInt(microseconds, 10))
	})
}
704 vendor/github.com/alicebob/miniredis/v2/cmd_set.go (generated, vendored)
@@ -1,704 +0,0 @@
|
||||
// Commands from https://redis.io/commands#set
|
||||
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/alicebob/miniredis/v2/server"
|
||||
)
|
||||
|
||||
// commandsSet handles all set value operations.
|
||||
func commandsSet(m *Miniredis) {
|
||||
m.srv.Register("SADD", m.cmdSadd)
|
||||
m.srv.Register("SCARD", m.cmdScard)
|
||||
m.srv.Register("SDIFF", m.cmdSdiff)
|
||||
m.srv.Register("SDIFFSTORE", m.cmdSdiffstore)
|
||||
m.srv.Register("SINTER", m.cmdSinter)
|
||||
m.srv.Register("SINTERSTORE", m.cmdSinterstore)
|
||||
m.srv.Register("SISMEMBER", m.cmdSismember)
|
||||
m.srv.Register("SMEMBERS", m.cmdSmembers)
|
||||
m.srv.Register("SMOVE", m.cmdSmove)
|
||||
m.srv.Register("SPOP", m.cmdSpop)
|
||||
m.srv.Register("SRANDMEMBER", m.cmdSrandmember)
|
||||
m.srv.Register("SREM", m.cmdSrem)
|
||||
m.srv.Register("SUNION", m.cmdSunion)
|
||||
m.srv.Register("SUNIONSTORE", m.cmdSunionstore)
|
||||
m.srv.Register("SSCAN", m.cmdSscan)
|
||||
}
|
||||
|
||||
// SADD
|
||||
func (m *Miniredis) cmdSadd(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, elems := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if db.exists(key) && db.t(key) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
added := db.setAdd(key, elems...)
|
||||
c.WriteInt(added)
|
||||
})
|
||||
}
|
||||
|
||||
// SCARD
|
||||
func (m *Miniredis) cmdScard(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(key) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
|
||||
if db.t(key) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
members := db.setMembers(key)
|
||||
c.WriteInt(len(members))
|
||||
})
|
||||
}
|
||||
|
||||
// SDIFF
|
||||
func (m *Miniredis) cmdSdiff(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
keys := args
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
set, err := db.setDiff(keys)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteSetLen(len(set))
|
||||
for k := range set {
|
||||
c.WriteBulk(k)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// SDIFFSTORE
|
||||
func (m *Miniredis) cmdSdiffstore(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
dest, keys := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
set, err := db.setDiff(keys)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
db.del(dest, true)
|
||||
db.setSet(dest, set)
|
||||
c.WriteInt(len(set))
|
||||
})
|
||||
}
|
||||
|
||||
// SINTER
|
||||
func (m *Miniredis) cmdSinter(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
keys := args
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
set, err := db.setInter(keys)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteLen(len(set))
|
||||
for k := range set {
|
||||
c.WriteBulk(k)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// SINTERSTORE
|
||||
func (m *Miniredis) cmdSinterstore(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
dest, keys := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
set, err := db.setInter(keys)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
db.del(dest, true)
|
||||
db.setSet(dest, set)
|
||||
c.WriteInt(len(set))
|
||||
})
|
||||
}
|
||||
|
||||
// SISMEMBER
|
||||
func (m *Miniredis) cmdSismember(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, value := args[0], args[1]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(key) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
|
||||
if db.t(key) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
if db.setIsMember(key, value) {
|
||||
c.WriteInt(1)
|
||||
return
|
||||
}
|
||||
c.WriteInt(0)
|
||||
})
|
||||
}
|
||||
|
||||
// SMEMBERS
|
||||
func (m *Miniredis) cmdSmembers(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(key) {
|
||||
c.WriteSetLen(0)
|
||||
return
|
||||
}
|
||||
|
||||
if db.t(key) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
members := db.setMembers(key)
|
||||
|
||||
c.WriteSetLen(len(members))
|
||||
for _, elem := range members {
|
||||
c.WriteBulk(elem)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// SMOVE
|
||||
func (m *Miniredis) cmdSmove(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) != 3 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
src, dst, member := args[0], args[1], args[2]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(src) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
|
||||
if db.t(src) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
if db.exists(dst) && db.t(dst) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
if !db.setIsMember(src, member) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
db.setRem(src, member)
|
||||
db.setAdd(dst, member)
|
||||
c.WriteInt(1)
|
||||
})
|
||||
}
|
||||
|
||||
// SPOP
|
||||
func (m *Miniredis) cmdSpop(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
opts := struct {
|
||||
key string
|
||||
withCount bool
|
||||
count int
|
||||
}{
|
||||
count: 1,
|
||||
}
|
||||
opts.key, args = args[0], args[1:]
|
||||
|
||||
if len(args) > 0 {
|
||||
v, err := strconv.Atoi(args[0])
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
if v < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgOutOfRange)
|
||||
return
|
||||
}
|
||||
opts.count = v
|
||||
opts.withCount = true
|
||||
args = args[1:]
|
||||
}
|
||||
if len(args) > 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(opts.key) {
|
||||
if !opts.withCount {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
c.WriteLen(0)
|
||||
return
|
||||
}
|
||||
|
||||
if db.t(opts.key) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
var deleted []string
|
||||
for i := 0; i < opts.count; i++ {
|
||||
members := db.setMembers(opts.key)
|
||||
if len(members) == 0 {
|
||||
break
|
||||
}
|
||||
member := members[m.randIntn(len(members))]
|
||||
db.setRem(opts.key, member)
|
||||
deleted = append(deleted, member)
|
||||
}
|
||||
// without `count` return a single value
|
||||
if !opts.withCount {
|
||||
if len(deleted) == 0 {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
c.WriteBulk(deleted[0])
|
||||
return
|
||||
}
|
||||
// with `count` return a list
|
||||
c.WriteLen(len(deleted))
|
||||
for _, v := range deleted {
|
||||
c.WriteBulk(v)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// SRANDMEMBER
|
||||
func (m *Miniredis) cmdSrandmember(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if len(args) > 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key := args[0]
|
||||
count := 0
|
||||
withCount := false
|
||||
if len(args) == 2 {
|
||||
var err error
|
||||
count, err = strconv.Atoi(args[1])
|
||||
if err != nil {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
withCount = true
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(key) {
|
||||
c.WriteNull()
|
||||
return
|
||||
}
|
||||
|
||||
if db.t(key) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
members := db.setMembers(key)
|
||||
if count < 0 {
|
||||
// Non-unique elements is allowed with negative count.
|
||||
c.WriteLen(-count)
|
||||
for count != 0 {
|
||||
member := members[m.randIntn(len(members))]
|
||||
c.WriteBulk(member)
|
||||
count++
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Must be unique elements.
|
||||
m.shuffle(members)
|
||||
if count > len(members) {
|
||||
count = len(members)
|
||||
}
|
||||
if !withCount {
|
||||
c.WriteBulk(members[0])
|
||||
return
|
||||
}
|
||||
c.WriteLen(count)
|
||||
for i := range make([]struct{}, count) {
|
||||
c.WriteBulk(members[i])
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// SREM
|
||||
func (m *Miniredis) cmdSrem(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
key, fields := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
if !db.exists(key) {
|
||||
c.WriteInt(0)
|
||||
return
|
||||
}
|
||||
|
||||
if db.t(key) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteInt(db.setRem(key, fields...))
|
||||
})
|
||||
}
|
||||
|
||||
// SUNION
|
||||
func (m *Miniredis) cmdSunion(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 1 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
keys := args
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
set, err := db.setUnion(keys)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
c.WriteLen(len(set))
|
||||
for k := range set {
|
||||
c.WriteBulk(k)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// SUNIONSTORE
|
||||
func (m *Miniredis) cmdSunionstore(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
dest, keys := args[0], args[1:]
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
|
||||
set, err := db.setUnion(keys)
|
||||
if err != nil {
|
||||
c.WriteError(err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
db.del(dest, true)
|
||||
db.setSet(dest, set)
|
||||
c.WriteInt(len(set))
|
||||
})
|
||||
}
|
||||
|
||||
// SSCAN
|
||||
func (m *Miniredis) cmdSscan(c *server.Peer, cmd string, args []string) {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(errWrongNumber(cmd))
|
||||
return
|
||||
}
|
||||
if !m.handleAuth(c) {
|
||||
return
|
||||
}
|
||||
if m.checkPubsub(c, cmd) {
|
||||
return
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
key string
|
||||
value int
|
||||
cursor int
|
||||
count int
|
||||
withMatch bool
|
||||
match string
|
||||
}
|
||||
|
||||
opts.key = args[0]
|
||||
if ok := optIntErr(c, args[1], &opts.cursor, msgInvalidCursor); !ok {
|
||||
return
|
||||
}
|
||||
args = args[2:]
|
||||
|
||||
// MATCH and COUNT options
|
||||
for len(args) > 0 {
|
||||
if strings.ToLower(args[0]) == "count" {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
count, err := strconv.Atoi(args[1])
|
||||
if err != nil || count < 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgInvalidInt)
|
||||
return
|
||||
}
|
||||
if count == 0 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
opts.count = count
|
||||
args = args[2:]
|
||||
continue
|
||||
}
|
||||
if strings.ToLower(args[0]) == "match" {
|
||||
if len(args) < 2 {
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
opts.withMatch = true
|
||||
opts.match = args[1]
|
||||
args = args[2:]
|
||||
continue
|
||||
}
|
||||
setDirty(c)
|
||||
c.WriteError(msgSyntaxError)
|
||||
return
|
||||
}
|
||||
|
||||
withTx(m, c, func(c *server.Peer, ctx *connCtx) {
|
||||
db := m.db(ctx.selectedDB)
|
||||
// return _all_ (matched) keys every time
|
||||
if db.exists(opts.key) && db.t(opts.key) != "set" {
|
||||
c.WriteError(ErrWrongType.Error())
|
||||
return
|
||||
}
|
||||
members := db.setMembers(opts.key)
|
||||
if opts.withMatch {
|
||||
members, _ = matchKeys(members, opts.match)
|
||||
}
|
||||
low := opts.cursor
|
||||
high := low + opts.count
|
||||
// validate high is correct
|
||||
if high > len(members) || high == 0 {
|
||||
high = len(members)
|
||||
}
|
||||
if opts.cursor > high {
|
||||
// invalid cursor
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk("0") // no next cursor
|
||||
c.WriteLen(0) // no elements
|
||||
return
|
||||
}
|
||||
cursorValue := low + opts.count
|
||||
if cursorValue > len(members) {
|
||||
cursorValue = 0 // no next cursor
|
||||
}
|
||||
members = members[low:high]
|
||||
c.WriteLen(2)
|
||||
c.WriteBulk(fmt.Sprintf("%d", cursorValue))
|
||||
c.WriteLen(len(members))
|
||||
for _, k := range members {
|
||||
c.WriteBulk(k)
|
||||
}
|
||||
|
||||
})
|
||||
}
|
||||
1880 vendor/github.com/alicebob/miniredis/v2/cmd_sorted_set.go (generated, vendored)
File diff suppressed because it is too large
1704 vendor/github.com/alicebob/miniredis/v2/cmd_stream.go (generated, vendored)
File diff suppressed because it is too large
1350 vendor/github.com/alicebob/miniredis/v2/cmd_string.go (generated, vendored)
File diff suppressed because it is too large
179 vendor/github.com/alicebob/miniredis/v2/cmd_transactions.go (generated, vendored)
@@ -1,179 +0,0 @@
// Commands from https://redis.io/commands#transactions

package miniredis

import (
	"github.com/alicebob/miniredis/v2/server"
)

// commandsTransaction handles MULTI &c.
func commandsTransaction(m *Miniredis) {
	m.srv.Register("DISCARD", m.cmdDiscard)
	m.srv.Register("EXEC", m.cmdExec)
	m.srv.Register("MULTI", m.cmdMulti)
	m.srv.Register("UNWATCH", m.cmdUnwatch)
	m.srv.Register("WATCH", m.cmdWatch)
}

// MULTI
func (m *Miniredis) cmdMulti(c *server.Peer, cmd string, args []string) {
	if len(args) != 0 {
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	ctx := getCtx(c)
	if ctx.nested {
		c.WriteError(msgNotFromScripts)
		return
	}
	if inTx(ctx) {
		c.WriteError("ERR MULTI calls can not be nested")
		return
	}

	startTx(ctx)

	c.WriteOK()
}

// EXEC
func (m *Miniredis) cmdExec(c *server.Peer, cmd string, args []string) {
	if len(args) != 0 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	ctx := getCtx(c)
	if ctx.nested {
		c.WriteError(msgNotFromScripts)
		return
	}
	if !inTx(ctx) {
		c.WriteError("ERR EXEC without MULTI")
		return
	}

	if ctx.dirtyTransaction {
		c.WriteError("EXECABORT Transaction discarded because of previous errors.")
		// a failed EXEC finishes the tx
		stopTx(ctx)
		return
	}

	m.Lock()
	defer m.Unlock()

	// Check WATCHed keys.
	for t, version := range ctx.watch {
		if m.db(t.db).keyVersion[t.key] > version {
			// Abort! Abort!
			stopTx(ctx)
			c.WriteLen(-1)
			return
		}
	}

	c.WriteLen(len(ctx.transaction))
	for _, cb := range ctx.transaction {
		cb(c, ctx)
	}
	// wake up anyone who waits on anything.
	m.signal.Broadcast()

	stopTx(ctx)
}

// DISCARD
func (m *Miniredis) cmdDiscard(c *server.Peer, cmd string, args []string) {
	if len(args) != 0 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	ctx := getCtx(c)
	if !inTx(ctx) {
		c.WriteError("ERR DISCARD without MULTI")
		return
	}

	stopTx(ctx)
	c.WriteOK()
}

// WATCH
func (m *Miniredis) cmdWatch(c *server.Peer, cmd string, args []string) {
	if len(args) == 0 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	ctx := getCtx(c)
	if ctx.nested {
		c.WriteError(msgNotFromScripts)
		return
	}
	if inTx(ctx) {
		c.WriteError("ERR WATCH in MULTI")
		return
	}

	m.Lock()
	defer m.Unlock()
	db := m.db(ctx.selectedDB)

	for _, key := range args {
		watch(db, ctx, key)
	}
	c.WriteOK()
}

// UNWATCH
func (m *Miniredis) cmdUnwatch(c *server.Peer, cmd string, args []string) {
	if len(args) != 0 {
		setDirty(c)
		c.WriteError(errWrongNumber(cmd))
		return
	}
	if !m.handleAuth(c) {
		return
	}
	if m.checkPubsub(c, cmd) {
		return
	}

	// Doesn't matter if UNWATCH is in a TX or not. Looks like a Redis bug to me.
	unwatch(getCtx(c))

	withTx(m, c, func(c *server.Peer, ctx *connCtx) {
		// Do nothing if it's called in a transaction.
		c.WriteOK()
	})
}
708 vendor/github.com/alicebob/miniredis/v2/db.go (generated, vendored)
@@ -1,708 +0,0 @@
|
||||
package miniredis
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"math/big"
|
||||
"sort"
|
||||
"strconv"
|
||||
"time"
|
||||
)
|
||||
|
||||
var (
|
||||
errInvalidEntryID = errors.New("stream ID is invalid")
|
||||
)
|
||||
|
||||
func (db *RedisDB) exists(k string) bool {
|
||||
_, ok := db.keys[k]
|
||||
return ok
|
||||
}
|
||||
|
||||
// t gives the type of a key, or ""
|
||||
func (db *RedisDB) t(k string) string {
|
||||
return db.keys[k]
|
||||
}
|
||||
|
||||
// allKeys returns all keys. Sorted.
|
||||
func (db *RedisDB) allKeys() []string {
|
||||
res := make([]string, 0, len(db.keys))
|
||||
for k := range db.keys {
|
||||
res = append(res, k)
|
||||
}
|
||||
sort.Strings(res) // To make things deterministic.
|
||||
return res
|
||||
}
|
||||
|
||||
// flush removes all keys and values.
|
||||
func (db *RedisDB) flush() {
|
||||
db.keys = map[string]string{}
|
||||
db.stringKeys = map[string]string{}
|
||||
db.hashKeys = map[string]hashKey{}
|
||||
db.listKeys = map[string]listKey{}
|
||||
db.setKeys = map[string]setKey{}
|
||||
db.hllKeys = map[string]*hll{}
|
||||
db.sortedsetKeys = map[string]sortedSet{}
|
||||
db.ttl = map[string]time.Duration{}
|
||||
db.streamKeys = map[string]*streamKey{}
|
||||
}
|
||||
|
||||
// move something to another db. Will return ok. Or not.
|
||||
func (db *RedisDB) move(key string, to *RedisDB) bool {
|
||||
if _, ok := to.keys[key]; ok {
|
||||
return false
|
||||
}
|
||||
|
||||
t, ok := db.keys[key]
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
to.keys[key] = db.keys[key]
|
||||
switch t {
|
||||
case "string":
|
||||
to.stringKeys[key] = db.stringKeys[key]
|
||||
case "hash":
|
||||
to.hashKeys[key] = db.hashKeys[key]
|
||||
case "list":
|
||||
to.listKeys[key] = db.listKeys[key]
|
||||
case "set":
|
||||
to.setKeys[key] = db.setKeys[key]
|
||||
case "zset":
|
||||
to.sortedsetKeys[key] = db.sortedsetKeys[key]
|
||||
case "stream":
|
||||
to.streamKeys[key] = db.streamKeys[key]
|
||||
case "hll":
|
||||
to.hllKeys[key] = db.hllKeys[key]
|
||||
default:
|
||||
panic("unhandled key type")
|
||||
}
|
||||
to.keyVersion[key]++
|
||||
if v, ok := db.ttl[key]; ok {
|
||||
to.ttl[key] = v
|
||||
}
|
||||
db.del(key, true)
|
||||
return true
|
||||
}
|
||||
|
||||
func (db *RedisDB) rename(from, to string) {
|
||||
db.del(to, true)
|
||||
switch db.t(from) {
|
||||
case "string":
|
||||
db.stringKeys[to] = db.stringKeys[from]
|
||||
case "hash":
|
||||
db.hashKeys[to] = db.hashKeys[from]
|
||||
case "list":
|
||||
db.listKeys[to] = db.listKeys[from]
|
||||
case "set":
|
||||
db.setKeys[to] = db.setKeys[from]
|
||||
case "zset":
|
||||
db.sortedsetKeys[to] = db.sortedsetKeys[from]
|
||||
case "stream":
|
||||
db.streamKeys[to] = db.streamKeys[from]
|
||||
case "hll":
|
||||
db.hllKeys[to] = db.hllKeys[from]
|
||||
default:
|
||||
panic("missing case")
|
||||
}
|
||||
db.keys[to] = db.keys[from]
|
||||
db.keyVersion[to]++
|
||||
if v, ok := db.ttl[from]; ok {
|
||||
db.ttl[to] = v
|
||||
}
|
||||
|
||||
db.del(from, true)
|
||||
}
|
||||
|
||||
func (db *RedisDB) del(k string, delTTL bool) {
|
||||
if !db.exists(k) {
|
||||
return
|
||||
}
|
||||
t := db.t(k)
|
||||
delete(db.keys, k)
|
||||
db.keyVersion[k]++
|
||||
if delTTL {
|
||||
delete(db.ttl, k)
|
||||
}
|
||||
switch t {
|
||||
case "string":
|
||||
delete(db.stringKeys, k)
|
||||
case "hash":
|
||||
delete(db.hashKeys, k)
|
||||
case "list":
|
||||
delete(db.listKeys, k)
|
||||
case "set":
|
||||
delete(db.setKeys, k)
|
||||
case "zset":
|
||||
delete(db.sortedsetKeys, k)
|
||||
case "stream":
|
||||
delete(db.streamKeys, k)
|
||||
case "hll":
|
||||
delete(db.hllKeys, k)
|
||||
default:
|
||||
panic("Unknown key type: " + t)
|
||||
}
|
||||
}
|
||||
|
||||
// stringGet returns the string key or "" on error/nonexists.
|
||||
func (db *RedisDB) stringGet(k string) string {
|
||||
if t, ok := db.keys[k]; !ok || t != "string" {
|
||||
return ""
|
||||
}
|
||||
return db.stringKeys[k]
|
||||
}
|
||||
|
||||
// stringSet force set()s a key. Does not touch expire.
|
||||
func (db *RedisDB) stringSet(k, v string) {
|
||||
db.del(k, false)
|
||||
db.keys[k] = "string"
|
||||
db.stringKeys[k] = v
|
||||
db.keyVersion[k]++
|
||||
}
|
||||
|
||||
// change int key value
|
||||
func (db *RedisDB) stringIncr(k string, delta int) (int, error) {
|
||||
v := 0
|
||||
if sv, ok := db.stringKeys[k]; ok {
|
||||
var err error
|
||||
v, err = strconv.Atoi(sv)
|
||||
if err != nil {
|
||||
return 0, ErrIntValueError
|
||||
}
|
||||
}
|
||||
v += delta
|
||||
db.stringSet(k, strconv.Itoa(v))
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// change float key value
|
||||
func (db *RedisDB) stringIncrfloat(k string, delta *big.Float) (*big.Float, error) {
|
||||
v := big.NewFloat(0.0)
|
||||
v.SetPrec(128)
|
||||
if sv, ok := db.stringKeys[k]; ok {
|
||||
var err error
|
||||
v, _, err = big.ParseFloat(sv, 10, 128, 0)
|
||||
if err != nil {
|
||||
return nil, ErrFloatValueError
|
||||
}
|
||||
}
|
||||
v.Add(v, delta)
|
||||
db.stringSet(k, formatBig(v))
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// listLpush is 'left push', aka unshift. Returns the new length.
|
||||
func (db *RedisDB) listLpush(k, v string) int {
|
||||
l, ok := db.listKeys[k]
|
||||
if !ok {
|
||||
db.keys[k] = "list"
|
||||
}
|
||||
l = append([]string{v}, l...)
|
||||
db.listKeys[k] = l
|
||||
db.keyVersion[k]++
|
||||
return len(l)
|
||||
}
|
||||
|
||||
// 'left pop', aka shift.
|
||||
func (db *RedisDB) listLpop(k string) string {
|
||||
l := db.listKeys[k]
|
||||
el := l[0]
|
||||
l = l[1:]
|
||||
if len(l) == 0 {
|
||||
db.del(k, true)
|
||||
} else {
|
||||
db.listKeys[k] = l
|
||||
}
|
||||
db.keyVersion[k]++
|
||||
return el
|
||||
}
|
||||
|
||||
func (db *RedisDB) listPush(k string, v ...string) int {
|
||||
l, ok := db.listKeys[k]
|
||||
if !ok {
|
||||
db.keys[k] = "list"
|
||||
}
|
||||
l = append(l, v...)
|
||||
db.listKeys[k] = l
|
||||
db.keyVersion[k]++
|
||||
return len(l)
|
||||
}
|
||||
|
||||
func (db *RedisDB) listPop(k string) string {
|
||||
l := db.listKeys[k]
|
||||
el := l[len(l)-1]
|
||||
l = l[:len(l)-1]
|
||||
if len(l) == 0 {
|
||||
db.del(k, true)
|
||||
} else {
|
||||
db.listKeys[k] = l
|
||||
db.keyVersion[k]++
|
||||
}
|
||||
return el
|
||||
}
|
||||
|
||||
// setset replaces a whole set.
|
||||
func (db *RedisDB) setSet(k string, set setKey) {
|
||||
db.keys[k] = "set"
|
||||
db.setKeys[k] = set
|
||||
db.keyVersion[k]++
|
||||
}
|
||||
|
||||
// setadd adds members to a set. Returns nr of new keys.
|
||||
func (db *RedisDB) setAdd(k string, elems ...string) int {
|
||||
s, ok := db.setKeys[k]
|
||||
if !ok {
|
||||
s = setKey{}
|
||||
db.keys[k] = "set"
|
||||
}
|
||||
added := 0
|
||||
for _, e := range elems {
|
||||
if _, ok := s[e]; !ok {
|
||||
added++
|
||||
}
|
||||
s[e] = struct{}{}
|
||||
}
|
||||
db.setKeys[k] = s
|
||||
db.keyVersion[k]++
|
||||
return added
|
||||
}
|
||||
|
||||
// setrem removes members from a set. Returns nr of deleted keys.
|
||||
func (db *RedisDB) setRem(k string, fields ...string) int {
|
||||
s, ok := db.setKeys[k]
|
||||
if !ok {
|
||||
return 0
|
||||
}
|
||||
removed := 0
|
||||
for _, f := range fields {
|
||||
if _, ok := s[f]; ok {
|
||||
removed++
|
||||
delete(s, f)
|
||||
}
|
||||
}
|
||||
if len(s) == 0 {
|
||||
db.del(k, true)
|
||||
} else {
|
||||
db.setKeys[k] = s
|
||||
}
|
||||
db.keyVersion[k]++
|
||||
return removed
|
||||
}
|
||||
|
||||
// All members of a set.
|
||||
func (db *RedisDB) setMembers(k string) []string {
|
||||
set := db.setKeys[k]
|
||||
members := make([]string, 0, len(set))
|
||||
for k := range set {
|
||||
members = append(members, k)
|
||||
}
|
||||
sort.Strings(members)
|
||||
return members
|
||||
}
|
||||
|
||||
// Is a SET value present?
|
||||
func (db *RedisDB) setIsMember(k, v string) bool {
|
||||
set, ok := db.setKeys[k]
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
_, ok = set[v]
|
||||
return ok
|
||||
}
|
||||
|
||||
// hashFields returns all (sorted) keys ('fields') for a hash key.
|
||||
func (db *RedisDB) hashFields(k string) []string {
|
||||
v := db.hashKeys[k]
|
||||
var r []string
|
||||
for k := range v {
|
||||
r = append(r, k)
|
||||
}
|
||||
sort.Strings(r)
|
||||
return r
|
||||
}
|
||||
|
||||
// hashValues returns all (sorted) values a hash key.
|
||||
func (db *RedisDB) hashValues(k string) []string {
|
||||
h := db.hashKeys[k]
|
||||
var r []string
|
||||
for _, v := range h {
|
||||
r = append(r, v)
|
||||
}
|
||||
sort.Strings(r)
|
||||
return r
|
||||
}
|
||||
|
||||
// hashGet a value
|
||||
func (db *RedisDB) hashGet(key, field string) string {
|
||||
return db.hashKeys[key][field]
|
||||
}
|
||||
|
||||
// hashSet returns the number of new keys
|
||||
func (db *RedisDB) hashSet(k string, fv ...string) int {
|
||||
if t, ok := db.keys[k]; ok && t != "hash" {
|
||||
db.del(k, true)
|
||||
}
|
||||
db.keys[k] = "hash"
|
||||
if _, ok := db.hashKeys[k]; !ok {
|
||||
db.hashKeys[k] = map[string]string{}
|
||||
}
|
||||
new := 0
|
||||
for idx := 0; idx < len(fv)-1; idx = idx + 2 {
|
||||
f, v := fv[idx], fv[idx+1]
|
||||
_, ok := db.hashKeys[k][f]
|
||||
db.hashKeys[k][f] = v
|
||||
db.keyVersion[k]++
|
||||
if !ok {
|
||||
new++
|
||||
}
|
||||
}
|
||||
return new
|
||||
}
|
||||
|
||||
// hashIncr changes int key value
|
||||
func (db *RedisDB) hashIncr(key, field string, delta int) (int, error) {
|
||||
v := 0
|
||||
if h, ok := db.hashKeys[key]; ok {
|
||||
if f, ok := h[field]; ok {
|
||||
var err error
|
||||
v, err = strconv.Atoi(f)
|
||||
if err != nil {
|
||||
return 0, ErrIntValueError
|
||||
}
|
||||
}
|
||||
}
|
||||
v += delta
|
||||
db.hashSet(key, field, strconv.Itoa(v))
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// hashIncrfloat changes float key value
|
||||
func (db *RedisDB) hashIncrfloat(key, field string, delta *big.Float) (*big.Float, error) {
|
||||
v := big.NewFloat(0.0)
|
||||
v.SetPrec(128)
|
||||
if h, ok := db.hashKeys[key]; ok {
|
||||
if f, ok := h[field]; ok {
|
||||
var err error
|
||||
v, _, err = big.ParseFloat(f, 10, 128, 0)
|
||||
if err != nil {
|
||||
return nil, ErrFloatValueError
|
||||
}
|
||||
}
|
||||
}
|
||||
v.Add(v, delta)
|
||||
db.hashSet(key, field, formatBig(v))
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// sortedSet set returns a sortedSet as map
|
||||
func (db *RedisDB) sortedSet(key string) map[string]float64 {
|
||||
ss := db.sortedsetKeys[key]
|
||||
return map[string]float64(ss)
|
||||
}
|
||||
|
||||
// ssetSet sets a complete sorted set.
|
||||
func (db *RedisDB) ssetSet(key string, sset sortedSet) {
|
||||
db.keys[key] = "zset"
|
||||
db.keyVersion[key]++
|
||||
db.sortedsetKeys[key] = sset
|
||||
}
|
||||
|
||||
// ssetAdd adds member to a sorted set. Returns whether this was a new member.
|
||||
func (db *RedisDB) ssetAdd(key string, score float64, member string) bool {
|
||||
ss, ok := db.sortedsetKeys[key]
|
||||
if !ok {
|
||||
ss = newSortedSet()
|
||||
db.keys[key] = "zset"
|
||||
}
|
||||
_, ok = ss[member]
|
||||
ss[member] = score
|
||||
db.sortedsetKeys[key] = ss
|
||||
db.keyVersion[key]++
|
||||
return !ok
|
||||
}
|
||||
|
||||
// All members from a sorted set, ordered by score.
|
||||
func (db *RedisDB) ssetMembers(key string) []string {
|
||||
ss, ok := db.sortedsetKeys[key]
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
elems := ss.byScore(asc)
|
||||
members := make([]string, 0, len(elems))
|
||||
for _, e := range elems {
|
||||
members = append(members, e.member)
|
||||
}
|
||||
return members
|
||||
}
|
||||
|
||||
// All members+scores from a sorted set, ordered by score.
|
||||
func (db *RedisDB) ssetElements(key string) ssElems {
|
||||
ss, ok := db.sortedsetKeys[key]
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return ss.byScore(asc)
|
||||
}
|
||||
|
||||
func (db *RedisDB) ssetRandomMember(key string) string {
|
||||
elems := db.ssetElements(key)
|
||||
if len(elems) == 0 {
|
||||
return ""
|
||||
}
|
||||
return elems[db.master.randIntn(len(elems))].member
|
||||
}
|
||||
|
||||
// ssetCard is the sorted set cardinality.
|
||||
func (db *RedisDB) ssetCard(key string) int {
|
||||
ss := db.sortedsetKeys[key]
|
||||
return ss.card()
|
||||
}
|
||||
|
||||
// ssetRank is the sorted set rank.
|
||||
func (db *RedisDB) ssetRank(key, member string, d direction) (int, bool) {
|
||||
ss := db.sortedsetKeys[key]
|
||||
return ss.rankByScore(member, d)
|
||||
}
|
||||
|
||||
// ssetScore is sorted set score.
|
||||
func (db *RedisDB) ssetScore(key, member string) float64 {
|
||||
ss := db.sortedsetKeys[key]
|
||||
return ss[member]
|
||||
}
|
||||
|
||||
// ssetRem is sorted set key delete.
|
||||
func (db *RedisDB) ssetRem(key, member string) bool {
|
||||
ss := db.sortedsetKeys[key]
|
||||
_, ok := ss[member]
|
||||
delete(ss, member)
|
||||
if len(ss) == 0 {
|
||||
// Delete key on removal of last member
|
||||
db.del(key, true)
|
||||
}
|
||||
return ok
|
||||
}
|
||||
|
||||
// ssetExists tells if a member exists in a sorted set.
|
||||
func (db *RedisDB) ssetExists(key, member string) bool {
|
||||
ss := db.sortedsetKeys[key]
|
||||
_, ok := ss[member]
|
||||
return ok
|
||||
}
|
||||
|
||||
// ssetIncrby changes float sorted set score.
|
||||
func (db *RedisDB) ssetIncrby(k, m string, delta float64) float64 {
|
||||
ss, ok := db.sortedsetKeys[k]
|
||||
if !ok {
|
||||
ss = newSortedSet()
|
||||
db.keys[k] = "zset"
|
||||
db.sortedsetKeys[k] = ss
|
||||
}
|
||||
|
||||
v, _ := ss.get(m)
|
||||
v += delta
|
||||
ss.set(v, m)
|
||||
db.keyVersion[k]++
|
||||
return v
|
||||
}
|
||||
|
||||
// setDiff implements the logic behind SDIFF*
|
||||
func (db *RedisDB) setDiff(keys []string) (setKey, error) {
|
||||
key := keys[0]
|
||||
keys = keys[1:]
|
||||
if db.exists(key) && db.t(key) != "set" {
|
||||
return nil, ErrWrongType
|
||||
}
|
||||
s := setKey{}
|
||||
for k := range db.setKeys[key] {
|
||||
s[k] = struct{}{}
|
||||
}
|
||||
for _, sk := range keys {
|
||||
if !db.exists(sk) {
|
||||
continue
|
||||
}
|
||||
if db.t(sk) != "set" {
|
||||
return nil, ErrWrongType
|
||||
}
|
||||
for e := range db.setKeys[sk] {
|
||||
delete(s, e)
|
||||
}
|
||||
}
|
||||
return s, nil
|
||||
}
|
||||
|
||||
// setInter implements the logic behind SINTER*
|
||||
// len keys needs to be > 0
|
||||
func (db *RedisDB) setInter(keys []string) (setKey, error) {
|
||||
// all keys must either not exist, or be of type "set".
|
||||
for _, key := range keys {
|
||||
if db.exists(key) && db.t(key) != "set" {
|
||||
return nil, ErrWrongType
|
||||
}
|
||||
}
|
||||
|
||||
key := keys[0]
|
||||
keys = keys[1:]
|
||||
if !db.exists(key) {
|
||||
return nil, nil
|
||||
}
|
||||
if db.t(key) != "set" {
|
||||
return nil, ErrWrongType
|
||||
}
|
||||
s := setKey{}
|
||||
for k := range db.setKeys[key] {
|
||||
s[k] = struct{}{}
|
||||
}
|
||||
for _, sk := range keys {
|
||||
if !db.exists(sk) {
|
||||
return setKey{}, nil
|
||||
}
|
||||
if db.t(sk) != "set" {
|
||||
return nil, ErrWrongType
|
||||
}
|
||||
other := db.setKeys[sk]
|
||||
for e := range s {
|
||||
if _, ok := other[e]; ok {
|
||||
continue
|
||||
}
|
||||
delete(s, e)
|
||||
}
|
||||
}
|
||||
return s, nil
|
||||
}
|
||||
|
||||
// setUnion implements the logic behind SUNION*
|
||||
func (db *RedisDB) setUnion(keys []string) (setKey, error) {
|
||||
key := keys[0]
|
||||
keys = keys[1:]
|
||||
if db.exists(key) && db.t(key) != "set" {
|
||||
return nil, ErrWrongType
|
||||
}
|
||||
s := setKey{}
|
||||
for k := range db.setKeys[key] {
|
||||
s[k] = struct{}{}
|
||||
}
|
||||
for _, sk := range keys {
|
||||
if !db.exists(sk) {
|
||||
continue
|
||||
}
|
||||
if db.t(sk) != "set" {
|
||||
return nil, ErrWrongType
|
||||
}
|
||||
for e := range db.setKeys[sk] {
|
||||
s[e] = struct{}{}
|
||||
}
|
||||
}
|
||||
return s, nil
|
||||
}
|
||||
|
||||
func (db *RedisDB) newStream(key string) (*streamKey, error) {
|
||||
if s, err := db.stream(key); err != nil {
|
||||
return nil, err
|
||||
} else if s != nil {
|
||||
return nil, fmt.Errorf("ErrAlreadyExists")
|
||||
}
|
||||
|
||||
db.keys[key] = "stream"
|
||||
s := newStreamKey()
|
||||
db.streamKeys[key] = s
|
||||
db.keyVersion[key]++
|
||||
return s, nil
|
||||
}
|
||||
|
||||
// return existing stream, or nil.
|
||||
func (db *RedisDB) stream(key string) (*streamKey, error) {
|
||||
if db.exists(key) && db.t(key) != "stream" {
|
||||
return nil, ErrWrongType
|
||||
}
|
||||
|
||||
return db.streamKeys[key], nil
|
||||
}
|
||||
|
||||
// return existing stream group, or nil.
|
||||
func (db *RedisDB) streamGroup(key, group string) (*streamGroup, error) {
|
||||
s, err := db.stream(key)
|
||||
if err != nil || s == nil {
|
||||
return nil, err
|
||||
}
|
||||
return s.groups[group], nil
|
||||
}
|
||||
|
||||
// fastForward proceeds the current timestamp with duration, works as a time machine
|
||||
func (db *RedisDB) fastForward(duration time.Duration) {
|
||||
for _, key := range db.allKeys() {
|
||||
if value, ok := db.ttl[key]; ok {
|
||||
db.ttl[key] = value - duration
|
||||
db.checkTTL(key)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (db *RedisDB) checkTTL(key string) {
|
||||
if v, ok := db.ttl[key]; ok && v <= 0 {
|
||||
db.del(key, true)
|
||||
}
|
||||
}
|
||||
|
||||
// hllAdd adds members to a hll. Returns 1 if at least 1 if internal HyperLogLog was altered, otherwise 0
|
||||
func (db *RedisDB) hllAdd(k string, elems ...string) int {
|
||||
s, ok := db.hllKeys[k]
|
||||
if !ok {
|
||||
s = newHll()
|
||||
db.keys[k] = "hll"
|
||||
}
|
||||
hllAltered := 0
|
||||
for _, e := range elems {
|
||||
if s.Add([]byte(e)) {
|
||||
hllAltered = 1
|
||||
}
|
||||
}
|
||||
db.hllKeys[k] = s
|
||||
db.keyVersion[k]++
|
||||
return hllAltered
|
||||
}
|
||||
|
||||
// hllCount estimates the amount of members added to hll by hllAdd. If called with several arguments, hllCount returns a sum of estimations
|
||||
func (db *RedisDB) hllCount(keys []string) (int, error) {
|
||||
countOverall := 0
|
||||
for _, key := range keys {
|
||||
if db.exists(key) && db.t(key) != "hll" {
|
||||
return 0, ErrNotValidHllValue
|
||||
}
|
||||
if !db.exists(key) {
|
||||
continue
|
||||
}
|
||||
countOverall += db.hllKeys[key].Count()
|
||||
}
|
||||
|
||||
return countOverall, nil
|
||||
}
|
||||
|
||||
// hllMerge merges all the hlls provided as keys to the first key. Creates a new hll in the first key if it contains nothing
|
||||
func (db *RedisDB) hllMerge(keys []string) error {
|
||||
for _, key := range keys {
|
||||
if db.exists(key) && db.t(key) != "hll" {
|
||||
return ErrNotValidHllValue
|
||||
}
|
||||
}
|
||||
|
||||
destKey := keys[0]
|
||||
restKeys := keys[1:]
|
||||
|
||||
var destHll *hll
|
||||
if db.exists(destKey) {
|
||||
destHll = db.hllKeys[destKey]
|
||||
} else {
|
||||
destHll = newHll()
|
||||
}
|
||||
|
||||
for _, key := range restKeys {
|
||||
if !db.exists(key) {
|
||||
continue
|
||||
}
|
||||
destHll.Merge(db.hllKeys[key])
|
||||
}
|
||||
|
||||
db.hllKeys[destKey] = destHll
|
||||
db.keys[destKey] = "hll"
|
||||
db.keyVersion[destKey]++
|
||||
|
||||
return nil
|
||||
}
|
||||
803 vendor/github.com/alicebob/miniredis/v2/direct.go (generated, vendored)
@@ -1,803 +0,0 @@
|
||||
package miniredis
|
||||
|
||||
// Commands to modify and query our databases directly.
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"math/big"
|
||||
"time"
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrKeyNotFound is returned when a key doesn't exist.
|
||||
ErrKeyNotFound = errors.New(msgKeyNotFound)
|
||||
|
||||
// ErrWrongType when a key is not the right type.
|
||||
ErrWrongType = errors.New(msgWrongType)
|
||||
|
||||
// ErrNotValidHllValue when a key is not a valid HyperLogLog string value.
|
||||
ErrNotValidHllValue = errors.New(msgNotValidHllValue)
|
||||
|
||||
// ErrIntValueError can returned by INCRBY
|
||||
ErrIntValueError = errors.New(msgInvalidInt)
|
||||
|
||||
// ErrFloatValueError can returned by INCRBYFLOAT
|
||||
ErrFloatValueError = errors.New(msgInvalidFloat)
|
||||
)
|
||||
|
||||
// Select sets the DB id for all direct commands.
|
||||
func (m *Miniredis) Select(i int) {
|
||||
m.Lock()
|
||||
defer m.Unlock()
|
||||
m.selectedDB = i
|
||||
}
|
||||
|
||||
// Keys returns all keys from the selected database, sorted.
|
||||
func (m *Miniredis) Keys() []string {
|
||||
return m.DB(m.selectedDB).Keys()
|
||||
}
|
||||
|
||||
// Keys returns all keys, sorted.
|
||||
func (db *RedisDB) Keys() []string {
|
||||
db.master.Lock()
|
||||
defer db.master.Unlock()
|
||||
|
||||
return db.allKeys()
|
||||
}
|
||||
|
||||
// FlushAll removes all keys from all databases.
|
||||
func (m *Miniredis) FlushAll() {
|
||||
m.Lock()
|
||||
defer m.Unlock()
|
||||
defer m.signal.Broadcast()
|
||||
|
||||
m.flushAll()
|
||||
}
|
||||
|
||||
func (m *Miniredis) flushAll() {
|
||||
for _, db := range m.dbs {
|
||||
db.flush()
|
||||
}
|
||||
}
|
||||
|
||||
// FlushDB removes all keys from the selected database.
|
||||
func (m *Miniredis) FlushDB() {
|
||||
m.DB(m.selectedDB).FlushDB()
|
||||
}
|
||||
|
||||
// FlushDB removes all keys.
|
||||
func (db *RedisDB) FlushDB() {
|
||||
db.master.Lock()
|
||||
defer db.master.Unlock()
|
||||
defer db.master.signal.Broadcast()
|
||||
|
||||
db.flush()
|
||||
}
|
||||
|
||||
// Get returns string keys added with SET.
|
||||
func (m *Miniredis) Get(k string) (string, error) {
|
||||
return m.DB(m.selectedDB).Get(k)
|
||||
}
|
||||
|
||||
// Get returns a string key.
|
||||
func (db *RedisDB) Get(k string) (string, error) {
|
||||
db.master.Lock()
|
||||
defer db.master.Unlock()
|
||||
|
||||
if !db.exists(k) {
|
||||
return "", ErrKeyNotFound
|
||||
}
|
||||
if db.t(k) != "string" {
|
||||
return "", ErrWrongType
|
||||
}
|
||||
return db.stringGet(k), nil
|
||||
}
|
||||
|
||||
// Set sets a string key. Removes expire.
|
||||
func (m *Miniredis) Set(k, v string) error {
|
||||
return m.DB(m.selectedDB).Set(k, v)
|
||||
}
|
||||
|
||||
// Set sets a string key. Removes expire.
|
||||
// Unlike redis the key can't be an existing non-string key.
|
||||
func (db *RedisDB) Set(k, v string) error {
|
||||
db.master.Lock()
|
||||
defer db.master.Unlock()
|
||||
defer db.master.signal.Broadcast()
|
||||
|
||||
if db.exists(k) && db.t(k) != "string" {
|
||||
return ErrWrongType
|
||||
}
|
||||
db.del(k, true) // Remove expire
|
||||
db.stringSet(k, v)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Incr changes a int string value by delta.
|
||||
func (m *Miniredis) Incr(k string, delta int) (int, error) {
|
||||
return m.DB(m.selectedDB).Incr(k, delta)
|
||||
}
|
||||
|
||||
// Incr changes a int string value by delta.
|
||||
func (db *RedisDB) Incr(k string, delta int) (int, error) {
|
||||
db.master.Lock()
|
||||
defer db.master.Unlock()
|
||||
defer db.master.signal.Broadcast()
|
||||
|
||||
if db.exists(k) && db.t(k) != "string" {
|
||||
return 0, ErrWrongType
|
||||
}
|
||||
|
||||
return db.stringIncr(k, delta)
|
||||
}
|
||||
|
||||
// IncrByFloat increments the float value of a key by the given delta.
|
||||
// is an alias for Miniredis.Incrfloat
|
||||
func (m *Miniredis) IncrByFloat(k string, delta float64) (float64, error) {
|
||||
return m.Incrfloat(k, delta)
|
||||
}
|
||||
|
||||
// Incrfloat changes a float string value by delta.
|
||||
func (m *Miniredis) Incrfloat(k string, delta float64) (float64, error) {
|
||||
return m.DB(m.selectedDB).Incrfloat(k, delta)
|
||||
}
|
||||
|
||||
// Incrfloat changes a float string value by delta.
|
||||
func (db *RedisDB) Incrfloat(k string, delta float64) (float64, error) {
|
||||
db.master.Lock()
|
||||
defer db.master.Unlock()
|
||||
defer db.master.signal.Broadcast()
|
||||
|
||||
if db.exists(k) && db.t(k) != "string" {
|
||||
return 0, ErrWrongType
|
||||
}
|
||||
|
||||
v, err := db.stringIncrfloat(k, big.NewFloat(delta))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
vf, _ := v.Float64()
|
||||
return vf, nil
|
||||
}
|
||||
|
||||
// List returns the list k, or an error if it's not there or something else.
// This is the same as the Redis command `LRANGE 0 -1`, but you can do your own
// range-ing.
func (m *Miniredis) List(k string) ([]string, error) {
    return m.DB(m.selectedDB).List(k)
}

// List returns the list k, or an error if it's not there or something else.
// This is the same as the Redis command `LRANGE 0 -1`, but you can do your own
// range-ing.
func (db *RedisDB) List(k string) ([]string, error) {
    db.master.Lock()
    defer db.master.Unlock()

    if !db.exists(k) {
        return nil, ErrKeyNotFound
    }
    if db.t(k) != "list" {
        return nil, ErrWrongType
    }
    return db.listKeys[k], nil
}

// Lpush prepends one value to a list. Returns the new length.
func (m *Miniredis) Lpush(k, v string) (int, error) {
    return m.DB(m.selectedDB).Lpush(k, v)
}

// Lpush prepends one value to a list. Returns the new length.
func (db *RedisDB) Lpush(k, v string) (int, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if db.exists(k) && db.t(k) != "list" {
        return 0, ErrWrongType
    }
    return db.listLpush(k, v), nil
}

// Lpop removes and returns the first element in a list.
func (m *Miniredis) Lpop(k string) (string, error) {
    return m.DB(m.selectedDB).Lpop(k)
}

// Lpop removes and returns the first element in a list.
func (db *RedisDB) Lpop(k string) (string, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if !db.exists(k) {
        return "", ErrKeyNotFound
    }
    if db.t(k) != "list" {
        return "", ErrWrongType
    }
    return db.listLpop(k), nil
}

// RPush appends one or multiple values to a list. Returns the new length.
// An alias for Push
func (m *Miniredis) RPush(k string, v ...string) (int, error) {
    return m.Push(k, v...)
}

// Push adds elements at the end. Returns the new length.
func (m *Miniredis) Push(k string, v ...string) (int, error) {
    return m.DB(m.selectedDB).Push(k, v...)
}

// Push adds elements at the end. Is called RPUSH in redis. Returns the new length.
func (db *RedisDB) Push(k string, v ...string) (int, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if db.exists(k) && db.t(k) != "list" {
        return 0, ErrWrongType
    }
    return db.listPush(k, v...), nil
}

// RPop is an alias for Pop
func (m *Miniredis) RPop(k string) (string, error) {
    return m.Pop(k)
}

// Pop removes and returns the last element. Is called RPOP in Redis.
func (m *Miniredis) Pop(k string) (string, error) {
    return m.DB(m.selectedDB).Pop(k)
}

// Pop removes and returns the last element. Is called RPOP in Redis.
func (db *RedisDB) Pop(k string) (string, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if !db.exists(k) {
        return "", ErrKeyNotFound
    }
    if db.t(k) != "list" {
        return "", ErrWrongType
    }

    return db.listPop(k), nil
}
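A similar sketch for the list helpers, assuming m is a *miniredis.Miniredis from miniredis.Run() as in the earlier example and the standard "fmt" import; the "jobs" key is illustrative:

func listSketch(m *miniredis.Miniredis) {
    n, _ := m.Push("jobs", "a", "b") // RPUSH-style append; n == 2
    m.Lpush("jobs", "first")         // LPUSH-style prepend
    items, _ := m.List("jobs")       // ["first", "a", "b"]
    head, _ := m.Lpop("jobs")        // "first"
    fmt.Println(n, items, head)
}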
// SAdd adds keys to a set. Returns the number of new keys.
// Alias for SetAdd
func (m *Miniredis) SAdd(k string, elems ...string) (int, error) {
    return m.SetAdd(k, elems...)
}

// SetAdd adds keys to a set. Returns the number of new keys.
func (m *Miniredis) SetAdd(k string, elems ...string) (int, error) {
    return m.DB(m.selectedDB).SetAdd(k, elems...)
}

// SetAdd adds keys to a set. Returns the number of new keys.
func (db *RedisDB) SetAdd(k string, elems ...string) (int, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if db.exists(k) && db.t(k) != "set" {
        return 0, ErrWrongType
    }
    return db.setAdd(k, elems...), nil
}

// SMembers returns all keys in a set, sorted.
// Alias for Members.
func (m *Miniredis) SMembers(k string) ([]string, error) {
    return m.Members(k)
}

// Members returns all keys in a set, sorted.
func (m *Miniredis) Members(k string) ([]string, error) {
    return m.DB(m.selectedDB).Members(k)
}

// Members gives all set keys. Sorted.
func (db *RedisDB) Members(k string) ([]string, error) {
    db.master.Lock()
    defer db.master.Unlock()

    if !db.exists(k) {
        return nil, ErrKeyNotFound
    }
    if db.t(k) != "set" {
        return nil, ErrWrongType
    }
    return db.setMembers(k), nil
}

// SIsMember tells if value is in the set.
// Alias for IsMember
func (m *Miniredis) SIsMember(k, v string) (bool, error) {
    return m.IsMember(k, v)
}

// IsMember tells if value is in the set.
func (m *Miniredis) IsMember(k, v string) (bool, error) {
    return m.DB(m.selectedDB).IsMember(k, v)
}

// IsMember tells if value is in the set.
func (db *RedisDB) IsMember(k, v string) (bool, error) {
    db.master.Lock()
    defer db.master.Unlock()

    if !db.exists(k) {
        return false, ErrKeyNotFound
    }
    if db.t(k) != "set" {
        return false, ErrWrongType
    }
    return db.setIsMember(k, v), nil
}
// HKeys returns all (sorted) keys ('fields') for a hash key.
func (m *Miniredis) HKeys(k string) ([]string, error) {
    return m.DB(m.selectedDB).HKeys(k)
}

// HKeys returns all (sorted) keys ('fields') for a hash key.
func (db *RedisDB) HKeys(key string) ([]string, error) {
    db.master.Lock()
    defer db.master.Unlock()

    if !db.exists(key) {
        return nil, ErrKeyNotFound
    }
    if db.t(key) != "hash" {
        return nil, ErrWrongType
    }
    return db.hashFields(key), nil
}

// Del deletes a key and any expiration value. Returns whether there was a key.
func (m *Miniredis) Del(k string) bool {
    return m.DB(m.selectedDB).Del(k)
}

// Del deletes a key and any expiration value. Returns whether there was a key.
func (db *RedisDB) Del(k string) bool {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if !db.exists(k) {
        return false
    }
    db.del(k, true)
    return true
}

// Unlink deletes a key and any expiration value. Returns whether there was a key.
// It's exactly the same as Del() and is not async. It is here for consistency.
func (m *Miniredis) Unlink(k string) bool {
    return m.Del(k)
}

// Unlink deletes a key and any expiration value. Returns whether there was a key.
// It's exactly the same as Del() and is not async. It is here for consistency.
func (db *RedisDB) Unlink(k string) bool {
    return db.Del(k)
}
// TTL is the left over time to live. As set via EXPIRE, PEXPIRE, EXPIREAT,
// PEXPIREAT.
// Note: this direct function returns 0 if there is no TTL set, unlike redis,
// which returns -1.
func (m *Miniredis) TTL(k string) time.Duration {
    return m.DB(m.selectedDB).TTL(k)
}

// TTL is the left over time to live. As set via EXPIRE, PEXPIRE, EXPIREAT,
// PEXPIREAT.
// 0 if not set.
func (db *RedisDB) TTL(k string) time.Duration {
    db.master.Lock()
    defer db.master.Unlock()

    return db.ttl[k]
}

// SetTTL sets the TTL of a key.
func (m *Miniredis) SetTTL(k string, ttl time.Duration) {
    m.DB(m.selectedDB).SetTTL(k, ttl)
}

// SetTTL sets the time to live of a key.
func (db *RedisDB) SetTTL(k string, ttl time.Duration) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    db.ttl[k] = ttl
    db.keyVersion[k]++
}

// Type gives the type of a key, or ""
func (m *Miniredis) Type(k string) string {
    return m.DB(m.selectedDB).Type(k)
}

// Type gives the type of a key, or ""
func (db *RedisDB) Type(k string) string {
    db.master.Lock()
    defer db.master.Unlock()

    return db.t(k)
}

// Exists tells whether a key exists.
func (m *Miniredis) Exists(k string) bool {
    return m.DB(m.selectedDB).Exists(k)
}

// Exists tells whether a key exists.
func (db *RedisDB) Exists(k string) bool {
    db.master.Lock()
    defer db.master.Unlock()

    return db.exists(k)
}
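The TTL helpers pair with miniredis' fake clock: keys only expire when the clock is advanced. A short sketch with the same assumptions as above (FastForward is assumed to come from the same package; "time" and "fmt" are standard imports; the key name is illustrative):

func ttlSketch(m *miniredis.Miniredis) {
    m.Set("session", "abc123")
    m.SetTTL("session", 10*time.Second)

    fmt.Println(m.TTL("session"))    // 10s
    m.FastForward(11 * time.Second)  // advance the fake clock past the TTL
    fmt.Println(m.Exists("session")) // false: the key has expired
}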
// HGet returns hash keys added with HSET.
// This will return an empty string if the key is not set. Redis would return
// a nil.
// Returns empty string when the key is of a different type.
func (m *Miniredis) HGet(k, f string) string {
    return m.DB(m.selectedDB).HGet(k, f)
}

// HGet returns hash keys added with HSET.
// Returns empty string when the key is of a different type.
func (db *RedisDB) HGet(k, f string) string {
    db.master.Lock()
    defer db.master.Unlock()

    h, ok := db.hashKeys[k]
    if !ok {
        return ""
    }
    return h[f]
}

// HSet sets hash keys.
// If there is another key by the same name it will be gone.
func (m *Miniredis) HSet(k string, fv ...string) {
    m.DB(m.selectedDB).HSet(k, fv...)
}

// HSet sets hash keys.
// If there is another key by the same name it will be gone.
func (db *RedisDB) HSet(k string, fv ...string) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    db.hashSet(k, fv...)
}

// HDel deletes a hash key.
func (m *Miniredis) HDel(k, f string) {
    m.DB(m.selectedDB).HDel(k, f)
}

// HDel deletes a hash key.
func (db *RedisDB) HDel(k, f string) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    db.hdel(k, f)
}

func (db *RedisDB) hdel(k, f string) {
    if _, ok := db.hashKeys[k]; !ok {
        return
    }
    delete(db.hashKeys[k], f)
    db.keyVersion[k]++
}

// HIncrBy increases the integer value of a hash field by delta (int).
func (m *Miniredis) HIncrBy(k, f string, delta int) (int, error) {
    return m.HIncr(k, f, delta)
}

// HIncr increases a key/field by delta (int).
func (m *Miniredis) HIncr(k, f string, delta int) (int, error) {
    return m.DB(m.selectedDB).HIncr(k, f, delta)
}

// HIncr increases a key/field by delta (int).
func (db *RedisDB) HIncr(k, f string, delta int) (int, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    return db.hashIncr(k, f, delta)
}

// HIncrByFloat increases a key/field by delta (float).
func (m *Miniredis) HIncrByFloat(k, f string, delta float64) (float64, error) {
    return m.HIncrfloat(k, f, delta)
}

// HIncrfloat increases a key/field by delta (float).
func (m *Miniredis) HIncrfloat(k, f string, delta float64) (float64, error) {
    return m.DB(m.selectedDB).HIncrfloat(k, f, delta)
}

// HIncrfloat increases a key/field by delta (float).
func (db *RedisDB) HIncrfloat(k, f string, delta float64) (float64, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    v, err := db.hashIncrfloat(k, f, big.NewFloat(delta))
    if err != nil {
        return 0, err
    }
    vf, _ := v.Float64()
    return vf, nil
}

// SRem removes fields from a set. Returns number of deleted fields.
func (m *Miniredis) SRem(k string, fields ...string) (int, error) {
    return m.DB(m.selectedDB).SRem(k, fields...)
}

// SRem removes fields from a set. Returns number of deleted fields.
func (db *RedisDB) SRem(k string, fields ...string) (int, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if !db.exists(k) {
        return 0, ErrKeyNotFound
    }
    if db.t(k) != "set" {
        return 0, ErrWrongType
    }
    return db.setRem(k, fields...), nil
}
// ZAdd adds a score,member to a sorted set.
func (m *Miniredis) ZAdd(k string, score float64, member string) (bool, error) {
    return m.DB(m.selectedDB).ZAdd(k, score, member)
}

// ZAdd adds a score,member to a sorted set.
func (db *RedisDB) ZAdd(k string, score float64, member string) (bool, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if db.exists(k) && db.t(k) != "zset" {
        return false, ErrWrongType
    }
    return db.ssetAdd(k, score, member), nil
}

// ZMembers returns all members of a sorted set by score
func (m *Miniredis) ZMembers(k string) ([]string, error) {
    return m.DB(m.selectedDB).ZMembers(k)
}

// ZMembers returns all members of a sorted set by score
func (db *RedisDB) ZMembers(k string) ([]string, error) {
    db.master.Lock()
    defer db.master.Unlock()

    if !db.exists(k) {
        return nil, ErrKeyNotFound
    }
    if db.t(k) != "zset" {
        return nil, ErrWrongType
    }
    return db.ssetMembers(k), nil
}

// SortedSet returns a raw string->float64 map.
func (m *Miniredis) SortedSet(k string) (map[string]float64, error) {
    return m.DB(m.selectedDB).SortedSet(k)
}

// SortedSet returns a raw string->float64 map.
func (db *RedisDB) SortedSet(k string) (map[string]float64, error) {
    db.master.Lock()
    defer db.master.Unlock()

    if !db.exists(k) {
        return nil, ErrKeyNotFound
    }
    if db.t(k) != "zset" {
        return nil, ErrWrongType
    }
    return db.sortedSet(k), nil
}

// ZRem deletes a member. Returns whether there was a key.
func (m *Miniredis) ZRem(k, member string) (bool, error) {
    return m.DB(m.selectedDB).ZRem(k, member)
}

// ZRem deletes a member. Returns whether there was a key.
func (db *RedisDB) ZRem(k, member string) (bool, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    if !db.exists(k) {
        return false, ErrKeyNotFound
    }
    if db.t(k) != "zset" {
        return false, ErrWrongType
    }
    return db.ssetRem(k, member), nil
}

// ZScore gives the score of a sorted set member.
func (m *Miniredis) ZScore(k, member string) (float64, error) {
    return m.DB(m.selectedDB).ZScore(k, member)
}

// ZScore gives the score of a sorted set member.
func (db *RedisDB) ZScore(k, member string) (float64, error) {
    db.master.Lock()
    defer db.master.Unlock()

    if !db.exists(k) {
        return 0, ErrKeyNotFound
    }
    if db.t(k) != "zset" {
        return 0, ErrWrongType
    }
    return db.ssetScore(k, member), nil
}
// XAdd adds an entry to a stream. `id` can be left empty or be '*'.
// If a value is given normal XADD rules apply. Values should have an even
// length.
func (m *Miniredis) XAdd(k string, id string, values []string) (string, error) {
    return m.DB(m.selectedDB).XAdd(k, id, values)
}

// XAdd adds an entry to a stream. `id` can be left empty or be '*'.
// If a value is given normal XADD rules apply. Values should have an even
// length.
func (db *RedisDB) XAdd(k string, id string, values []string) (string, error) {
    db.master.Lock()
    defer db.master.Unlock()
    defer db.master.signal.Broadcast()

    s, err := db.stream(k)
    if err != nil {
        return "", err
    }
    if s == nil {
        s, _ = db.newStream(k)
    }

    return s.add(id, values, db.master.effectiveNow())
}

// Stream returns a slice of stream entries. Oldest first.
func (m *Miniredis) Stream(k string) ([]StreamEntry, error) {
    return m.DB(m.selectedDB).Stream(k)
}

// Stream returns a slice of stream entries. Oldest first.
func (db *RedisDB) Stream(key string) ([]StreamEntry, error) {
    db.master.Lock()
    defer db.master.Unlock()

    s, err := db.stream(key)
    if err != nil {
        return nil, err
    }
    if s == nil {
        return nil, nil
    }
    return s.entries, nil
}
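A sketch of the stream helpers above, with the same assumptions as the earlier examples (stream name and field/value pairs are illustrative):

func streamSketch(m *miniredis.Miniredis) {
    // "*" asks for an auto-generated ID; values must be an even-length field/value list.
    id, err := m.XAdd("events", "*", []string{"level", "info", "msg", "started"})
    if err != nil {
        panic(err)
    }
    entries, _ := m.Stream("events")
    fmt.Println(id, len(entries)) // e.g. 1234567890123-0 1
}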
// Publish a message to subscribers. Returns the number of receivers.
func (m *Miniredis) Publish(channel, message string) int {
    m.Lock()
    defer m.Unlock()

    return m.publish(channel, message)
}

// PubSubChannels is "PUBSUB CHANNELS <pattern>". An empty pattern is fine
// (meaning all channels).
// Returned channels will be ordered alphabetically.
func (m *Miniredis) PubSubChannels(pattern string) []string {
    m.Lock()
    defer m.Unlock()

    return activeChannels(m.allSubscribers(), pattern)
}

// PubSubNumSub is "PUBSUB NUMSUB [channels]". It returns all channels with their
// subscriber count.
func (m *Miniredis) PubSubNumSub(channels ...string) map[string]int {
    m.Lock()
    defer m.Unlock()

    subs := m.allSubscribers()
    res := map[string]int{}
    for _, channel := range channels {
        res[channel] = countSubs(subs, channel)
    }
    return res
}

// PubSubNumPat is "PUBSUB NUMPAT"
func (m *Miniredis) PubSubNumPat() int {
    m.Lock()
    defer m.Unlock()

    return countPsubs(m.allSubscribers())
}
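And a sketch of the pub/sub inspection helpers, under the same assumptions (the channel name and message are illustrative):

func pubSubSketch(m *miniredis.Miniredis) {
    n := m.Publish("alerts", "disk full") // number of subscribers that received it
    counts := m.PubSubNumSub("alerts")    // e.g. map[alerts:0] when nothing is subscribed
    fmt.Println(n, counts)
}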
// PfAdd adds keys to a hll. Returns 1 if the inner hll value has been changed.
func (m *Miniredis) PfAdd(k string, elems ...string) (int, error) {
    return m.DB(m.selectedDB).HllAdd(k, elems...)
}

// HllAdd adds keys to a hll. Returns 1 if the inner hll value has been changed.
func (db *RedisDB) HllAdd(k string, elems ...string) (int, error) {
    db.master.Lock()
    defer db.master.Unlock()

    if db.exists(k) && db.t(k) != "hll" {
        return 0, ErrWrongType
    }
    return db.hllAdd(k, elems...), nil
}

// PfCount returns an estimation of the amount of elements previously added to a hll.
func (m *Miniredis) PfCount(keys ...string) (int, error) {
    return m.DB(m.selectedDB).HllCount(keys...)
}

// HllCount returns an estimation of the amount of elements previously added to a hll.
func (db *RedisDB) HllCount(keys ...string) (int, error) {
    db.master.Lock()
    defer db.master.Unlock()

    return db.hllCount(keys)
}

// PfMerge merges all the input hlls into a hll under destKey key.
func (m *Miniredis) PfMerge(destKey string, sourceKeys ...string) error {
    return m.DB(m.selectedDB).HllMerge(destKey, sourceKeys...)
}

// HllMerge merges all the input hlls into a hll under destKey key.
func (db *RedisDB) HllMerge(destKey string, sourceKeys ...string) error {
    db.master.Lock()
    defer db.master.Unlock()

    return db.hllMerge(append([]string{destKey}, sourceKeys...))
}
// Copy a value.
// Needs the IDs of both the source and dest DBs (which can differ).
// Returns ErrKeyNotFound if src does not exist.
// Overwrites dest if it already exists (unlike the redis command, which needs a flag to allow that).
func (m *Miniredis) Copy(srcDB int, src string, destDB int, dest string) error {
    return m.copy(m.DB(srcDB), src, m.DB(destDB), dest)
}
46
vendor/github.com/alicebob/miniredis/v2/geo.go
generated
vendored
@@ -1,46 +0,0 @@
package miniredis

import (
    "math"

    "github.com/alicebob/miniredis/v2/geohash"
)

func toGeohash(long, lat float64) uint64 {
    return geohash.EncodeIntWithPrecision(lat, long, 52)
}

func fromGeohash(score uint64) (float64, float64) {
    lat, long := geohash.DecodeIntWithPrecision(score, 52)
    return long, lat
}

// haversin(θ) function
func hsin(theta float64) float64 {
    return math.Pow(math.Sin(theta/2), 2)
}

// distance function returns the distance (in meters) between two points of
// a given longitude and latitude relatively accurately (using a spherical
// approximation of the Earth) through the Haversin Distance Formula for
// great arc distance on a sphere with accuracy for small distances
// point coordinates are supplied in degrees and converted into rad. in the func
// distance returned is meters
// http://en.wikipedia.org/wiki/Haversine_formula
// Source: https://gist.github.com/cdipaolo/d3f8db3848278b49db68
func distance(lat1, lon1, lat2, lon2 float64) float64 {
    // convert to radians
    // must cast radius as float to multiply later
    var la1, lo1, la2, lo2 float64
    la1 = lat1 * math.Pi / 180
    lo1 = lon1 * math.Pi / 180
    la2 = lat2 * math.Pi / 180
    lo2 = lon2 * math.Pi / 180

    earth := 6372797.560856 // Earth radius in METERS, according to src/geohash_helper.c

    // calculate
    h := hsin(la2-la1) + math.Cos(la1)*math.Cos(la2)*hsin(lo2-lo1)

    return 2 * earth * math.Asin(math.Sqrt(h))
}
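The distance helper above is unexported, but the same haversine computation can be sketched as a standalone program (the Earth radius constant is the one used in the removed file; the sample coordinates are illustrative):

package main

import (
    "fmt"
    "math"
)

func hsin(theta float64) float64 { return math.Pow(math.Sin(theta/2), 2) }

// distance returns the great-circle distance in meters between two lat/long points given in degrees.
func distance(lat1, lon1, lat2, lon2 float64) float64 {
    la1, lo1 := lat1*math.Pi/180, lon1*math.Pi/180
    la2, lo2 := lat2*math.Pi/180, lon2*math.Pi/180
    const earth = 6372797.560856 // meters
    h := hsin(la2-la1) + math.Cos(la1)*math.Cos(la2)*hsin(lo2-lo1)
    return 2 * earth * math.Asin(math.Sqrt(h))
}

func main() {
    // Paris to Berlin: roughly 880 km.
    fmt.Printf("%.0f m\n", distance(48.8566, 2.3522, 52.5200, 13.4050))
}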
22
vendor/github.com/alicebob/miniredis/v2/geohash/LICENSE
generated
vendored
@@ -1,22 +0,0 @@
The MIT License (MIT)

Copyright (c) 2015 Michael McLoughlin

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Some files were not shown because too many files have changed in this diff.